doi: string (17–24 chars)
transcript: string (305–148k chars)
abstract: string (5–6.38k chars)
10.5446/52179 (DOI)
Good morning. Can you hear me? Closer? Better? Okay. So that was weird, but things are blue now. I didn't mean to make it Halloween themed and colorful; the screen wanted to do it for me. So I'm going to talk about Unicode and why strings are kind of weird in Rust, using the Han characters as an example. My name is Jenny Manning, and you can find me on Twitter. I'm in Pittsburgh, and that's how I know a lot of people here. I realized after I had already chosen this talk that my Twitter handle is actually an ASCII/Unicode joke, so it's already thematically correct.

So, to get started: who has tried to index into a string in Rust? Yeah, exactly. You get a "string cannot be indexed" error and you're like, why is this happening to me? This is the worst thing ever. Or you ask: what is the length of these things? How long do I expect this string of five letters to be? Cool, it's five. Perfect. But what if we look at something different, the Kanji for rust? Thematic. If we look at this, Rust will say that this single character has a length of three, not one. That has to do with Unicode scalar values encoding to different numbers of bytes, so the byte length doesn't always correlate with the character count. You need to call .chars() and count those instead of taking the length, unless you actually want the byte length. These are what the actual bytes look like in UTF-8. And if you try to get the first value of this, it errors out, because you're asking for something that's not a valid character on its own: the value it would return is 233, which just doesn't make sense by itself. (A short code sketch of all of this appears just below.)

Why is this so complicated? Most of us use Latin-based languages all the time and just want the ease of ASCII. Why does Unicode have so many different characters? Why is it important? Why does Rust actually care about it? A major reason goes way back to the second millennium BCE. In the Shang Dynasty there was a thing called oracle bone script, which the rulers came up with to divine the future — an entire writing system (which I'm happy to talk about more outside of this) meant to capture everything you would want to capture about language. You write the characters on a bone, throw it in a fire, and where it cracks tells you things about the future. This is what evolved into classical Chinese. So if we look here at the character for "capital" as an example: it started in oracle bone script and then evolved and changed over time. A major point is that traditional Chinese writing also spread to Korea, Vietnam, and Japan around that same period, so we have very similar symbols across all of these different cultures and languages. The traditional form is what things usually look like in Korean — although now it's mostly Hangul that's used rather than Hanja, which is the old-fashioned way — and modern Chinese has simplified a little bit. But these are the same characters: they have different pronunciations in the different languages with the same meaning.

Okay, so that's nice and all, but still: why can I not just use ASCII? Well, ASCII is kind of old.
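Here's the sketch promised above — a minimal illustration of the behavior just described, assuming the character on the slide is 錆 (the Kanji for "rust"):

```rust
fn main() {
    let latin = "hello";
    assert_eq!(latin.len(), 5); // five bytes, five characters: the easy case

    let kanji = "錆"; // sabi, "rust": U+9306
    assert_eq!(kanji.len(), 3);           // .len() counts UTF-8 bytes
    assert_eq!(kanji.chars().count(), 1); // .chars() counts scalar values
    assert_eq!(kanji.as_bytes(), &[233, 140, 134]); // 0xE9 0x8C 0x86

    // `kanji[0]` will not compile: Rust strings cannot be indexed.
    // `&kanji[0..1]` compiles but panics: byte 233 is not a character on its own.
}
```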
It was published in the 60s, and the first thing it was used for was a teleprinter code — which, a teleprinter code, that's interesting. So — let me make this a little bit bigger, because it's difficult to see — that's why Unicode was conceived. Some people at Xerox and Apple decided to collaborate on an idea for a universal character set. That work started in the late 80s, and the first version was published in the early 90s. The entire point was to have a unique code point for each character. The second volume, published eight months later, included the Han unification, which relates to what I was saying a second ago and was very controversial, for reasons.

I said "character," right? But what actually is a character? There's a grapheme and there's a glyph. A grapheme is what we think of as the letter A or the digit zero: the smallest abstract unit of a writing system. A glyph is what we see — the specific shape that represents a grapheme, which varies across fonts. And that was a major point of Unicode: Unicode encodes the graphemes, not the glyphs. When ASCII was created, it unified some overlapping glyphs that it saw — the apostrophe and the single quotation mark are just the same code, not unique characters at all. Unicode, when it was created, focused much more on graphemes over glyphs, because it wanted to fit far more characters.

This leads into what I mentioned before: the Han unification. If we look at the characters used in Chinese, Korean, Japanese, and some used in Vietnam, that's a lot of characters. Some of them differ by very small alterations, some by huge ones — it very much varies. But when they were releasing the second volume, they were told: you get about 20,000 code points to fit all of these languages and everything you need. So there had to be some creative thinking. The concepts of "abstract character shape" and "actual character shape" were defined — roughly graphemes and glyphs — and multiple character sets across Chinese, Korean, and Japanese were mapped into single code points. They were unified not by appearance but by meaning or definition: in cases where the same character means something completely different in Chinese and Japanese, those were given unique code points.

So we go back to the character we were looking at. Across Chinese, Japanese, and Korean, this has one code point, with a font applied on top of it. Just as Times New Roman is how we change the appearance of an A, a font — or a locale setting — is how an entire culture's rendering is selected. That leads to some weird outcomes and is a major reason this is controversial. In this case they all have the same meaning and generally look about the same — maybe a little different, but close. But there are also cases where they look really, really different. Korean uses the traditional form, and a lot of modern Chinese uses the simplified one.
And then Japanese has a totally different form altogether. So across these three cultures, this is a case where the shapes are really, really different and can be very confusing. For this specific case, the variants did end up being split into separate code points later, because they're so different. But there are other cases where the shapes differ across one language and still share the same code point.

Some consequences of this: it's really difficult to render these variants in the same file. It's hard to even find examples on the Internet, because it's hard to have them on the same page — you have to take screenshots. But they're all the same Unicode value. So it's very confusing if you want to discuss, say, the historical aspect — "this is the traditional symbol that led into this one" — because it's the same code point and the appearance depends on which font you set. You can end up with the wrong variant very easily: copy and paste something with the wrong font configured and you might see a completely wrong form. This is a big reason people were unhappy and felt that their cultures, and the differences between all those cultures, were being ignored and blended together, as if the distinctions didn't exist anymore.

The unification did condense roughly 100,000 characters down to only about 20,000 for that first allotment, which was useful, because future editions added more CJK characters and more allotments. But it's very easy to end up with the wrong thing, and you can end up with kind of wonky-looking text.

Unicode currently allows for just over a million code points — 1,114,112 — and as of right now about 137,000 of them are assigned. Around 70,000 of those are CJK (Chinese, Japanese, Korean) unified ideographs; when they first capped the allotment, they weren't planning to expand that much. This is what the Unicode code space looks like — all of that white is unassigned. So they condensed everything down and said you can only have so many spots, but this also shows how much room Unicode has to expand and how many more code points we could end up with. There are 17 planes, each with a lot of code points, and pretty much only the first two are used: blue means assigned, white means unused.

So let's look a little at the planes that are actually used. When they were first coming out with Unicode, the Basic Multilingual Plane was the only plane they really planned to use — they weren't expecting to expand much past it. The first plane has most of the commonly used characters. Can you read that, actually? Yeah? Okay. All of the CJK characters — those first 20,000 or so slots they were given — are all of the pink here. Before they were added, the first volume's assignments only went up through about the U+33xx range; everything after that came later, and we keep adding more and more. That's the BMP, the first plane. Then we've added more historical scripts, symbols, and notations. The Supplementary Multilingual Plane is the next one.
It has things like Egyptian hieroglyphs and less common scripts in it — but ones we can still expect to see. And then there's plane 2 — the third plane, because we index from zero — which contains a lot of additional characters. When Unicode's second volume was added, most family names couldn't be written on the Internet: many of the characters we've been talking about appear in people's last names. And a lot of times still, there are many people whose names simply don't exist in Unicode — there's just no value for them. So a lot of name characters were added to plane 2 to try to offset that. It's another criticism of the Han unification that it went for commonly used words and didn't support a lot of other needs.

Okay, so this is what the code space looks like from a coloring perspective. The Han unification ends up being all of the teal — still a large share of what's assigned. And this next chart is usage: how often the different code points actually occur in the real world, with the sampling coming from Wikipedia and Twitter. Even though plane 2 exists for family names, it's almost never used. So in practice we do care the most about ASCII — about our Latin characters.

Unicode started out with a few different encodings being tried, but the ones that stuck were UTF-8, UTF-16, and UTF-32. UTF-8 was actually invented at a diner by Ken Thompson and Rob Pike in September of 1992. They were like, there's not a good way to solve this — wait, I have an idea — and literally wrote it down at the diner. The way UTF-8 works is that a character takes one to four bytes; the four-byte limit was set in 2003, so it can only expand so far. Roughly: one byte covers the ASCII characters; two bytes covers accented letters and things like Greek; three bytes covers a lot of the Han unification; and four bytes is mostly used for emoji right now. In the four-byte form you don't actually get all four bytes of payload — you only get 21 bits — because the leading byte has to signal how many bytes follow, and every continuation byte starts with the bits 10. That's how, when parsing, it knows where a char starts and how far to look forward. UTF-16 has a similar scheme, but with two-byte units instead of single bytes; the unusual thing about UTF-16 is its surrogates, which allow pairing two units together and were kind of tacked on as an afterthought.

But Rust uses UTF-8, and Rust has chosen, by default, to enforce that all strings are valid UTF-8. This is why we care about any of these things: Rust says, if it's not valid UTF-8, I'm not going to even let you compile this.
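As a quick aside, here's a sketch that shows those byte widths directly — the specific example characters are my own choices, not from the talk:

```rust
fn main() {
    // One character per UTF-8 width: ASCII, an accented Latin letter,
    // a Han character, and an emoji (which lives outside the BMP).
    for &c in &['A', 'é', '錆', '🦀'] {
        let mut buf = [0u8; 4];
        let bytes = c.encode_utf8(&mut buf).as_bytes();
        println!("{} U+{:04X} -> {:02X?} ({} byte(s))", c, c as u32, bytes, bytes.len());
    }
}
```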
So, instead of letting people shoot themselves in the foot like you can in C or Java or other languages — which will let you index into a string and then hand you a segfault or a runtime error or panic — Rust tries its best not to let you do that, and makes you think about UTF-8 problems up front. So in Rust, how do I get the characters of a string by index? We're so used to being able to; you just can't. Rust doesn't allow it. You have to convert the string to something else first.

Some best practices for Rust. String slicing is dangerous, because it is not protected in the same way: you can't index and say "give me the second byte," but you can say "give me the first four bytes," and if that range doesn't end on a valid code point boundary, it will panic. So you can slice into a string and cause Rust to panic at runtime; it's much better to iterate over it using .chars(). That still doesn't handle grapheme clusters, though — you need a separate crate for that. And you should normalize your strings before doing comparisons: the character é can be a single scalar, or it can be composed of an e followed by a combining accent character. Those aren't the same — if you compare those strings, Rust (like most programming languages) will say they're different. But if you normalize them, the diacritic gets combined and they become the same thing. There's a crate I recommend for that too. And I'm actually going to end here, if that's cool with people. Thanks a lot for listening — I hope there was something useful.
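To make those practices concrete, here's a small sketch. I'm assuming the crates alluded to are unicode-segmentation and unicode-normalization, which are the usual choices (the transcript doesn't name them):

```rust
// Assumed in Cargo.toml: unicode-segmentation = "1", unicode-normalization = "0.1"
use unicode_normalization::UnicodeNormalization;
use unicode_segmentation::UnicodeSegmentation;

fn main() {
    let s = "né";
    // Slicing must land on char boundaries: &s[0..2] would panic at runtime,
    // because byte 2 falls inside the two-byte 'é'. Iterating is safe:
    assert_eq!(s.chars().count(), 2);

    // 'é' can be one scalar (U+00E9) or two ('e' plus U+0301, combining acute).
    let composed = "\u{e9}";
    let decomposed = "e\u{301}";
    assert_ne!(composed, decomposed); // naive comparison says "different"
    assert_eq!(
        composed.nfc().collect::<String>(),
        decomposed.nfc().collect::<String>(), // equal after NFC normalization
    );
    // And both are a single grapheme cluster: one user-perceived character.
    assert_eq!(decomposed.graphemes(true).count(), 1);
}
```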
Have you ever wondered why you can’t look up a character in a string by its index? Or why the length of a string can be wildly different from the number of characters in the string? In this talk, we’ll dive into Unicode by looking at how Kanji is represented in Rust. You’ll learn about things like the Han unification, the origins of CJK languages from Oracle bone script, and why Rust handles strings differently than we expect.
10.5446/52180 (DOI)
So, yeah, I'll be talking about my experiences introducing Rust into an existing embedded code base, with a little background about my experience first. I work for Lexmark; we manufacture laser printers. I've been there about 12 years now. I started on the kernel team, which did occasionally work on the kernel, but mostly it was the team that got to deal with the problems that were too hard for other teams to debug. So thread A stomps on thread B's stack, and team B is really confused about what happened to their stack — that goes to the kernel team, and we help unravel it. That was fun, but we got to deal with a lot of really nasty memory-unsafety issues on that team, plus performance problems and cases where somebody ran the system out of memory. Then I spent some time on the network team, doing a lot of protocol interpretation and a lot of security-sensitive work. And then I spent about 18 months on the build system team, where I helped move the firmware team away from an old proprietary in-house build system over to a standard open source tool — we now use BitBake, from the Yocto Project. Outside of work, I like retrocomputing, especially anything 6502-based like the Commodore 64, and when I get frustrated with computers, I like to go fly around.

Our embedded systems are laser printers, and for Lexmark that means an ARM computer running Linux with some motors and lasers attached to it, which is kind of cool if you think about it that way. We use a lot of free and open source software: Apache, Android on some products, systemd to coordinate our boot, and Python. As for what kind of embedded system this is — embedded can scale from really big to really small, and we're probably in the middle, actually spanning a fairly decent range. We're multi-core on all of our devices, either two or four ARM Cortex-A53 cores. On the low end we have as little as 256 megabytes of RAM, scaling all the way up to four gigs on the big boxes; similarly for code, as little as 128 megs of NAND — which is not that much for a Yocto-based system in particular — all the way up to four gigabytes. And if that still seems too big and too much horsepower to count as embedded, we're also looking at using Rust on a system with 256 kilobytes of RAM running the Zephyr OS, and I'll touch back on that in a little bit.

In addition to the nice open source parts, there are unfortunately a lot of proprietary printer-specific parts: PostScript and PDF interpretation, which actually describes the images that are going to be printed; a graphics engine for formatting, resizing, halftoning, and screening; and then the actual code that runs the lasers and does the mechanism control and all of that fun stuff. Some of this code is extremely old — it dates back to the late 80s. It's written in C and C++, of course, but it's not just C, it's really scary, bad C: the kind of C code where you pass a void* everywhere and just cast back on the other side because you "know" what it is. We actually still have some functions with K&R-style prototypes, where the compiler will literally let you pass whatever you want to anything. As you might expect, we have a lot of the issues that such a code base would naturally have — memory leaks, security issues. It's not a great situation.
So the question was how we could start digging our way out of this technical debt. We all know that Rust solves a lot of these problems, but the question is how to introduce Rust without breaking the code that we have — and not only that, without breaking the people that we have. We have a lot of C developers who are comfortable with C and don't necessarily want to learn any new language, especially one with a reputation for being difficult to get used to.

You might also ask, if you're not very familiar with the laser printer industry: why do we even need an operating system? You're just putting some toner dots on a piece of paper and running some motors; that seems easy. Well, here's the list of things we have to deal with — I just added items until I filled up the slide, and it didn't take very long. We have to authenticate against Active Directory, a Windows domain, so that users can log into the printer. We have to be able to pull files from Windows file shares and push to them. Some of our devices have a Java virtual machine running third-party code, so customers can have custom workflows and do whatever third parties want to do. RFID badge readers, again for authentication — we even support the CAC standard used by the Department of Defense, a smart-card standard, so lots of code goes into that. We're an Internet of Things device — we were doing that really before it had a name — with DHCP, Ethernet, Wi-Fi; we even have printers that can be directly connected with fiber. I don't know who wants to do that, but apparently somebody did. We have an embedded web server — like I mentioned, we're running Apache — so you can hit port 80 or port 443 and get a control panel, settings, debug tools, all sorts of fun stuff. AirPrint, Google Cloud Print; we can send emails; we can do optical character recognition on the printer; and we have to deal with security compliance for government, so we have to do audit logging. Lots and lots of stuff — clearly we need an operating system to help us manage all of these things.

Even though this code is newer than the low-level printing stuff, it's unfortunately not all that much better. It's still written in C, it still uses pointers, and it still misuses pointers. You might think that C++14 and C++17 add helpful new features — smart pointers, move semantics — and the new features do allow you to write better code. But unfortunately C++ doesn't require you to write better code, so people still just kind of write code the way they're used to.

Our approach to getting Rust into the system was to just do it. We didn't ask anybody for permission; we picked a project that was off to the side and not very important and said, all right, we're going to write this in Rust. We did that for two reasons. One, if management came along and said, no, you can't use this, it would only take us a couple of days to rewrite it, so it wouldn't be that big of a deal. And two, so that most people didn't have to know or care that it was there — it just built, and if you weren't touching that particular component, you could continue to live your life as a C developer and pretend that nothing had changed.
We also wanted to keep Rust in its own processes in the early days, primarily because of the problems I mentioned earlier where people write through stale pointers and invalidate random memory. We didn't want some busted C code corrupting Rust data structures, crashing a Rust thread, and giving Rust a bad name through no fault of its own. So we kept it nicely isolated. Fortunately, that experiment went very well. As people found out about it, nobody really minded, because it worked; we explained the benefits, and people bought in and said, okay, that makes sense, keep doing that. That gave us the confidence to go to more complex components. We still avoided things with a lot of dependencies and didn't go to the core of the system — in a legacy embedded system you get a lot of the "everything depends on everything" effect — so we tried to stay at the edges, primarily so we didn't have to deal with FFI: we didn't want to generate bindings from our C headers or generate headers for C code, and there's extra build system complexity there as well.

Fortunately, that has also gone really well, and we now have probably about a dozen people in the company who are competent in Rust, and four or five of us who are pretty passionate about it. So we now treat anything as fair game, and we've done some fairly large projects in Rust. We particularly push Rust for anything security-sensitive, where memory safety gives you a lot of benefit, and for anything that deals with protocol parsing — Serde is really good at that, and there are other good crates in the Rust ecosystem for it, but we've used Serde in multiple places and been really happy with it. Similarly for the client/server stuff: I mentioned I was on the network team and saw that we had lots of opportunity to use things like Tokio and futures, which we've been using a bit, and we're very excited about async/await coming to the standard library. The one area we still shy away from is anything that requires unsafe code — if people just think they need unsafe, we really try to discourage that. I've joked that since I run the Git servers, if I have to, I'll put in a hook that rejects anything that adds "unsafe" to a .rs file. Fortunately, I've not had to do that yet.

I know that not everyone here is an embedded person, so I wanted to give a little background on BitBake, the build tool we use. The point of BitBake is to let you build your own custom Linux distribution: you choose the architecture, the compile flags, how big or small it is. It's kind of analogous to Buildroot, another open source tool that does a similar thing. It's part of the Yocto Project, which has quite a bit of industry backing — Intel is behind it, and a lot of single-board computers and other systems support it — so we've really gotten the benefit of the open source community and the support that's gone into it. As far as the history of the code base, it actually goes back to Gentoo's emerge: if you've ever used Gentoo and looked at its recipes and their syntax, you'll be pretty at home with how BitBake builds things.
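Before the narration of the example recipe, here's a hedged sketch of what such a recipe roughly looks like — the component name, URIs, versions, and checksums here are all hypothetical, and the recipe on the actual slide may differ:

```bitbake
# Hypothetical recipe for a Rust component; all names and URIs invented.
SUMMARY = "Example Rust component"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://LICENSE;md5=..."

inherit cargo

# BitBake does the fetching itself -- including every crate from Cargo.lock --
# so builds can go through source mirrors and caches.
SRC_URI = "git://example.com/our-component.git;protocol=https \
           crate://crates.io/serde/1.0.100 \
           crate://crates.io/log/0.4.8 \
          "
SRCREV = "0123456789abcdef..."
S = "${WORKDIR}/git"
```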
The point of these recipes is that you have one recipe for each software component in your system. Each generates an RPM, so you build up a whole bunch of RPM packages, and then you have a recipe for an image that says, here are all the packages I want to pull in; BitBake takes care of handling the dependencies, generating the root file system, putting it in a cramfs or whatever type of transport you need to actually program it down to the device. Here's a little example recipe. You don't really have to understand most of it, but there are a couple of interesting things you can deduce from it. One is that BitBake wants to fetch the code for you: it wants to know where the code comes from, and it wants to do the fetch itself. That's not really the way Cargo works. Similarly, it wants to know all about your dependencies — which components you depend on — so that if they change, BitBake knows to rebuild you. Already you can see that BitBake and Cargo are going to be playing in the same area, and there might be some tension there.

From a very practical perspective, the way we're building our Rust code into our system is a project called meta-rust — and full disclosure, I am one of the co-maintainers now; I didn't start it, but I am involved in it. What that project gives you are the BitBake recipes for the compiler itself and for the standard library, plus the rules for being able to easily cross-compile. It's not too hard to build a Rust toolchain for native compilation, but if you're building for ARM or some other CPU that you're not running on, it gets a little more difficult; meta-rust takes care of all of that for you. Then there's another tool called cargo-bitbake: a helper program that you run from your own crate, and it generates a BitBake recipe for you. You go drop that into your recipes repository, and now BitBake will be able to build your Rust code. In particular, it looks at the Cargo.lock file and figures out all of your direct and indirect dependencies. This allows us to hook into BitBake's ability to fetch code — now we can play nicely with source mirrors and caching, and really just try to go along with BitBake's model of the world as much as possible.

So far, our experiences have been really positive. I'm not the first person to have said this, but it's another vote for "if it builds, there's a very good chance it'll work." We've seen that in lots of cases: yes, we may have had to fight with the borrow checker a little bit, but once we got through, it did the right thing. And not only did it work right away, it continued to work — we didn't push it out to the testers only to have them find a bunch of corner cases and holes. We wrote it, got it working, and were able to move on to the next thing without a lot of downstream work later in the development cycle. The corollary, for me at least, is that any time I try to write unsafe code, it will crash — I think I'm 100% on that. Unsafe is especially tempting for C developers: you might think, I'm used to writing C code, C is unsafe, therefore unsafe Rust is just like C and can't be that much worse. But actually it is, and the reason is that the Rust compiler will make a lot more assumptions about the system and your code than a C compiler will.
So you actually have to maintain a lot more guarantees, even in the unsafe parts of Rust, than you might be used to coming from a C background.

On the good side, runtime memory use is fairly low — for our purposes, low enough — and pretty predictable. The memory model, with borrowing and most things living on the stack instead of the heap, makes it fairly predictable, whereas an equivalent C++ program is more likely to rely on dynamic memory management, which is much harder to profile and predict.

The main bad side — and unfortunately there are some — is code size. As I mentioned earlier, on the very low end we have products that fit into 128 megs of NAND. Even without any Rust code, stripped down about as far as you can — just the kernel, libc, systemd, and a couple of other very low-level things — you're at 30 or 40 megs before you get to any application code. So there's not a whole lot of room to play with. We started introducing Rust while we were doing the high-end devices — our product cycles bounce back and forth between a few high-end devices and a few low-end ones — so everything was great on the high-end devices, but then we really had to look at how efficiently we were spending code size on Rust.

Now, coming from a C background, I would think: these multiple Rust binaries duplicate a lot of code between them, so we can just use a shared library, move all the shared code into the shared object, and there's only one copy of it. Everything will be great. Well, meta-rust supports that a little bit: it has the ability to build libstd as a shared object, and that helps for at least that part. But it really doesn't help with your intermediate crates — things like futures and Tokio and Serde and all the others that get used in a lot of different projects. Those still end up duplicated. And really, at the end of the day, Rust's shared library support is not all that mature: there's no standardized ABI, which is one of the reasons the developers kind of discourage that use. That's not a big problem for BitBake, because it builds everything in a single pass and knows when it needs to rebuild things to keep everything consistent. But even when you do use shared libraries, generated code for generics still ends up in the individual binaries. For example — I'm not sure this particular example is literally true — if you use a Vec<u8> in multiple programs, the compiler generates code for it that isn't necessarily in the standard library binary, so it may end up in each of your executables as well. And this is kind of understandable, because the use case for dylibs in Rust is not really reducing code size; they're not worried about my particular case. They just want you to be able to make a shared object that you can link against, or that you can dlopen, for FFI purposes. Given that use case, it makes sense that they want to put as much code as possible into the shared library so it's standalone and self-sufficient — but that doesn't help when you have multiple Rust binaries and care about the code size implications.
The first pass we took at solving this problem didn't work, but I'll describe it so nobody else has to go try it. Our thought was: Cargo really wants to build lots of static binaries and do the full pass all by itself, so we just won't use Cargo. We'll invoke rustc directly, write a recipe for every individual crate — a recipe for regex, a recipe for memchr, and so on, several dozen of them — and generate shared objects for each one. And while that is bad, it wasn't as bad as it would be in other languages: the highly regular structure of Rust projects made it a kind of tractable approach. You always have a lib.rs that you can point rustc at, and it knows how to find all the other source files from there. In C or C++, every project is laid out completely differently, with no regularity, so it really wouldn't be tractable at all. But even though you can kind of do it, it doesn't scale very well, especially once you get into more complex crates like Tokio and futures. And what completely kills it is build.rs: if a crate wants to build and run Rust code in order to figure out how to build the rest of its Rust code, we really just had no way of supporting that.

So the current solution we use is: if having multiple Rust binaries wastes a lot of code, what if we only have a single binary? We take all the Rust in the system and link it into one large executable, which we call uranium. It's kind of a super-binary, like BusyBox if you're familiar with that — BusyBox takes a bunch of common embedded tools, including the shell, ls, test, grep, more, all of those, and links them into one program. This is the same idea, but with our Rust code. The advantage is that with a single Rust executable, we don't need shared objects at all, not even for libstd. With static linking, the parts of libstd we don't use don't get pulled in, so we pay no code penalty for them. That also extends to all of our intermediate components — futures, Tokio, Serde — which now get linked in only one time, and again only the parts we use. And similarly for generated generic code: the link-time optimizer is smart enough to figure out that the same instantiations serving multiple downstream dependencies are the same functions and include them only once. From a theoretical perspective, this is almost the best you can possibly do, short of the compiler just generating less code: we have only the code that we need, and only one single copy of it.

To go into a little more of the nitty-gritty of how we implemented this, in case the approach might be helpful for someone else — there are downsides to it — we went into all of our Rust programs and converted them from bins to rlibs. Then we used git submodules to pull them all into a single monorepo, so that BitBake can check out all of our Rust code in the system at once, and we can use Cargo to build it all in a single pass.
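A rough sketch of the busybox-style argv[0] dispatch that gets described next — the applet crate names here are hypothetical, and the real uranium code surely differs:

```rust
// Hypothetical applet crates, each a former standalone binary turned rlib,
// listed as dependencies in uranium's Cargo.toml:
//   use netconfig; use authd;

fn main() {
    // Decide which applet to run from the name we were invoked under (argv[0]),
    // the same way busybox dispatches through its symlinks.
    let argv0 = std::env::args().next().unwrap_or_default();
    let applet = std::path::Path::new(&argv0)
        .file_name()
        .and_then(|n| n.to_str())
        .unwrap_or("")
        .to_owned();

    match applet.as_str() {
        "netconfig" => netconfig::main(), // hypothetical
        "authd" => authd::main(),         // hypothetical
        other => {
            eprintln!("uranium: unknown applet {:?}", other);
            std::process::exit(1);
        }
    }
}
```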
Then the real main function — the uranium main function — just looks at argv[0] to figure out which program you actually wanted to run, and dispatches to the correct main function in the correct crate. Like I said, that builds everything in a single pass of Cargo, which itself has pros and cons. But it does avoid one downside this approach would have in C. As a C person, if you told someone you were going to link a bunch of completely unrelated code into a single binary, they would give you pause, because you can grow undesired, implicit dependencies between unrelated parts of your code. C symbol visibility is basically "everything can see everything," so you can call a function from a completely unrelated piece of code and the compiler and linker will be okay with it — at the end of the day it's all in the executable and all visible. Rust at least protects against that particular software engineering danger: unless you list the dependency in your Cargo.toml, it won't let you use it, even though everything does get linked together in the end.

To give a little more color: there are actually still two components — the liburanium shared object, which is where the bulk of the code lives, and a tiny executable that does nothing but thunk over to the main function in the shared object. We do it that way so that in the few places where we do want to do FFI, we can link directly against the shared object without incurring another copy and those code penalties again, and we can also just dlopen it. In our first pass at this, we actually built a single position-independent executable and thought, okay, a PIE is technically the same thing — or almost the same thing — as a shared library, so let's just dlopen or link against that. That doesn't work. Don't do that.

There are some downsides to this. One of the bigger ones is build time. Because all of your Rust code is built in a single pass of Cargo, if any of your Rust code anywhere changes, you have to rebuild all of it. BitBake's caching model is per recipe — if anything in that recipe changes, the whole recipe rebuilds — so you don't really get to benefit much from incremental compilation. Local developers might get a little of it, but any kind of system build is a clean build from scratch, and it takes a fair amount of time. It also adds some extra hurdles for developers as they integrate their code: in addition to checking your code into the Rust repo you care about, you now also have to go into our monorepo and update the git submodule there, and then finally go into the meta layer where the BitBake recipe lives and change the SRCREV as well. So while it is maybe just one extra step, that's a 50% overhead, and it's not great. Another big problem is if you have lots of different subsets of this code in use. In our use today, we pretty much use all of the Rust code everywhere — we just kind of have one configuration, if you will, of Rust programs. But if you wanted to have, on one image, say, applets A, B, and C, and on a second image B, C, and D, et cetera, this does not scale well at all.
It's like n-factorial scaling or something. So if you have a lot of different flavors, you're much worse off than in the other model I talked about, where you have a single recipe per Rust crate. We do know about some things we can look at in the future that might make this better — and if you have other ideas for how to improve it, definitely find me in the hallways afterwards to talk about it. One thought we've had is to use a private registry, which would get around at least some of the downsides of the git submodule approach: rather than tracking with submodules, we'd push versioned releases into our registry and consume them from there. That does require that you version your code correctly, which we're not always great about, but we think it's probably a more Rust-like way to handle the problem. What would really benefit us the most, though, is if Cargo had a better ability to do its work not all at once — if we could run Cargo in one place and use the output of that run in the next, so we could still have multiple Rust recipes, each running Cargo and each benefiting from the output of previous runs. I'm not a compiler guy or a Cargo guy; I don't know how easy or difficult that is. But that's the model that works for us, because it's the model that BitBake — and a lot of similar build tools, like Nix and others with similar problems — are built around.

I did want to circle back and plug the zephyr-rust project a little bit. That's primarily Tyler Hall's work — he can wave from back there, and if you're curious about it, you can talk to him afterwards. Basically, it allows us to run a Rust app on the Zephyr kernel. I probably should have said what Zephyr is: it's a real-time operating system for really small devices, primarily Internet of Things — usually on the order of hundreds of kilobytes of RAM, on devices that don't even have a memory management unit. So even if you wanted to run Linux on there, you really can't. Today it has the capability to call any Zephyr syscall you want as an unsafe call, and it also has safe wrappers for a lot of them. We have some initial support for async/await for certain types of I/O — not networking yet, but UARTs and things like that can be used with async/await. And in particular, it does not require no_std. A lot of the projects that target these kinds of bare-metal systems require that you not use the standard library at all; here, the entire standard library compiles, and anything that isn't OS-specific — anything not going through std's platform layer — will build. It may or may not run, because a lot of it is stubbed out, but it at least makes it a lot easier to get your existing code running in this environment. It's available on GitHub, and that's the link. And that's all I've got. Thank you very much.
Discuss the what to do, what not to do, and the “we’re still not sure if this is a good idea” parts of using Rust in an existing code base that is both resource-constrained and littered with history. Specific topics will include building Rust projects inside the bitbake build system, as well as balancing different deployment methods against code size constraints.
10.5446/52182 (DOI)
Hello, I'm John — John Knapp. I am the sole person at a company called Coffee and Code. Hi, nice to meet you. I'm from Akron, Ohio. I'm going to tell you about RustBridge — or rather, how to run your own RustBridge. Quick show of hands: anybody here been to any bridge events before? A very small number of people. Cool, well, that's okay.

A little bit about me: I know nil in Rust. That's a joke — there actually is no nil in Rust. Thank you, one person is awake; that was, like, the best joke. Okay, so I'm lying: I have a little bit of knowledge of Rust, but actually not a whole lot. I'm a web developer by trade and do a lot of consulting work, but Rust hasn't found its way into my production workflow very quickly, I would say, so I get to play with it in my spare time — during work hours it's not adding to my knowledge. But I'm very intrigued. I'm also busy: I have two small kids at home (yay, kids) and everything that comes with them. So what I wanted to do was find ways to learn more Rust, or force myself to learn more Rust, and I found rustbridge.com.

RustBridge aims to be a focused, one-day workshop on getting underrepresented people — with backgrounds in potentially another programming language — exposed to Rust, and then helping them find resources to continue their learning. The really cool thing that comes with this is that they have a ton of repositories and a ton of information that make it extremely easy for somebody with very close to no knowledge of the language itself to teach it to other people. Everything is there. You get intro-to-Rust materials as well as presenter notes on how to talk about different things — a brief introduction that compares and contrasts concepts you would see in other programming languages. There's information for organizers about how to find attendees and how to find a venue. It references the Rust code of conduct, and it makes it very clear and to the point that these things are important and that you should bring them to your event right off the bat. There are also sample applications that you can build as a group: you do the introduction to Rust for about the first half of the day, then come back in the second half and do something a little bit meatier with the new knowledge people have gained.

So what did I do? Well, the first step was finding some other organizers, because I needed people to keep me accountable or else I would have just dropped off. We started using Trello for organizing tasks, and we started meeting roughly every week, in the evenings, remotely. We found a venue and put down some cash to lock that down, and ordered some cool stickers that look like this — I'm from the Northeast Ohio area, so I did a Cleveland RustBridge. We created a PDF for bulletin boards. That's something I probably wouldn't have thought of right away, but somebody recommended it to me: universities like to print things and put them somewhere for students to look at, and a little one-pager is a really nice way to bring in people — we got some CS students this way. We tried to find some sponsors, and used Google Forms for registration and follow-up surveys.
Then, internally, as we were planning this with the other organizers, we built some small custom applications to play around with, which we would then use as teaching material for the second half of the workshop — again, forcing myself to learn the things I wanted to learn for the sake of teaching them to other people who were potentially interested as well. We then did some trial runs of the material and compared our notes in the evenings, so we didn't look like fools when we talked people through it on the workshop day.

And how did it go? Well, we got a really good turnout and some interesting feedback — people really enjoyed the content. One response we got back said: "The presenters were more than qualified to teach the language, so I had no problem asking them a million and one questions." That's a quote from one of the attendees — and again, I had very, very, very little knowledge of Rust itself. So I hope you can use this as motivation: if you are interested in getting more people interested in Rust in your local community, think about putting on a workshop yourself. You can find more information at rustbridge.com, or feel free to ask me some questions. Thank you.
Lightning talks are 5 minutes long, on any topic, by anyone. Proposals are voted on by attendees and selected by conference organizers. Not all lightning talks have slides, and due to some technical limitations, not all slides were captured.
10.5446/52186 (DOI)
Hello, I'm Holden Marcisian — I go by Osspial on the Internet. You may know me as the Windows maintainer for winit, which is the main pure-Rust window creation library. We currently support Windows, Mac, Linux, iOS, and, as of yesterday, the web. Today I'm going to be talking about the various baffling things all the platform-specific windowing systems do that make this job hard. Naturally, since we're covering everything, this talk will be a couple of hours long, so hang tight. There we go.

So, let's say you are designing a new windowing library. This [a poll-style event iterator] is probably the first API you try. It has a couple of benefits: it is easy to understand, it's nice and rusty, it's easy to maintain — and it's wrong. Well, if you are writing desktop games, this mostly works fine. To quickly illustrate how you use it: your application constantly spins in a loop, greedily pulling all the events out of the OS event queue; then you process your events and redraw your window. If you're writing a game, this is pretty much all you need to do. You like high frame rates, you're going to consume all these resources by necessity, and games need to update at least 60 times per second anyway.

But then you get to desktop app developers, such as myself, who start complaining that constantly consuming resources doesn't work, because unlike games, desktop apps have to play nicely with other apps and make sure they have time to run their own code. So OS APIs expose a function that allows you to wait until the OS has a new event and only wake the program up when that happens. And that works great: from the library user's perspective this isn't all that different, but it doesn't eat all of your CPU, and your application can sit in the background when it doesn't need to work.

So you think you've satisfied them, but then desktop developers ask: can I have multiple windows at once? The previous model didn't really work in multi-window environments, because you can only wait on a single window at once. So at first you tell them to spawn a new thread for each window, which desktop developers accept begrudgingly, because they don't want to work on this low-level stuff. So you've satisfied the Windows developers, you've satisfied the Linux developers — and the macOS developers tell you that their program crashes when they have multiple windows. So you go ahead and test this, and it turns out macOS cannot run event loops off the main thread. That is annoying, but you can work around it; you just need to design a new API. So now you create a separate event loop struct: all the windows reference that struct, all the events get funneled into that one event loop, everything gets delivered to one place, and everything seems to work.

And then it doesn't. This time, both the Windows and macOS programmers have the issue that whenever you try to resize one of the windows, the entire application freezes. This is because Windows and macOS only expose poll-events and wait-events APIs to lure you into a false sense of security: whenever a resize starts, the OS starts its own internal event loop and only returns control flow to your main event loop when the resize is done. You can't actually put any of your application logic outside of those functions. Instead, you have to pass a closure to the OS and put all of your code in there. So, you do that.
You stop using iterators; you give the OS complete control over the event loop. That isn't particularly rusty, but Microsoft forced it on you, so you go ahead and accept it. It's much less readable, but it works, and that is what matters. Then you decide to port your library to iOS, and iOS doesn't have poll-events or wait-events — it only has run. Since we are already just exposing run, this mostly works, but on iOS, run never, ever returns. So you modify your return type: it returns the never type, and it works on iOS. But how are you supposed to expose that on desktop, where applications like to return? I am aware of two main hacks to make this work. Either you run the OS event loop and then panic — obviously a bad solution, but it would technically do what you need — or, slightly better, you use a process exit to abort the application at the end of the event loop, which is what winit is currently doing. Please tell me if you know a better solution; nobody likes this, including myself.

So now you've conquered the desktop, you've conquered iOS, and you want to move on to the Internet: we port winit to WebAssembly. The problem is that in WebAssembly, run always has to return, because the browser never surrenders control of the event loop. Running main, you pass your closure to the browser; then you return from main back to the browser, and only then does it call that closure. That kind of sucks if you want to support iOS and the web in the same API, but there is a solution. You start out, you set the event handler, then you throw a JavaScript exception to kill the stack, catch the exception at the point where the caller entered your init, and return from that function as normal. Now everything works: you can compile code once and run it anywhere. Next you have to deal with graphics, which I don't have time for, because I only have a couple of seconds left. Thank you all so much for listening to this talk, and thank you to all the winit maintainers, because this is far more work than any single person could ever do.
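For reference, the closure-based API that winit converged on looks roughly like this — a sketch against the 0.20-era API, so details may differ between versions:

```rust
use winit::{
    event::{Event, WindowEvent},
    event_loop::{ControlFlow, EventLoop},
    window::WindowBuilder,
};

fn main() {
    let event_loop = EventLoop::new();
    let _window = WindowBuilder::new().build(&event_loop).unwrap();

    // `run` takes ownership of the event loop and never returns (`!`):
    // from here on, the closure *is* the application.
    event_loop.run(move |event, _target, control_flow| {
        // Sleep until the OS delivers an event, instead of spinning.
        *control_flow = ControlFlow::Wait;
        match event {
            Event::WindowEvent { event: WindowEvent::CloseRequested, .. } => {
                *control_flow = ControlFlow::Exit;
            }
            _ => (),
        }
    });
}
```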
Lightning talks are 5 minutes long, on any topic, by anyone. Proposals are voted on by attendees and selected by conference organizers. Not all lightning talks have slides, and due to some technical limitations, not all slides were captured.
10.5446/52187 (DOI)
Hello. This is the result of some research a bunch of us did yesterday, so we're picking this all up fresh. If you've watched some YouTube, you might recognize this screenshot. If you don't, it's from the video "Watch for Rolling Rocks in half an A-press." The idea is that there are a bunch of people who work to play Mario pressing the A button as few times as possible, so they can't do normal jumps. Now, we want to write a Rust program like that. Simple, we'll just start: we'll make a new pro— oh. We can't type "cargo new": "cargo" has an 'a' in it. Well, bash has a simple solution for this: we'll use printf to produce our command, typing something like c, backslash-141, rgo, new — printf expands the octal escape \141 into the letter 'a' for us. You can type whatever you want like this. That's a little bit boring, though, so we're going to skip it and show real commands; you can always fall back to this trick.

Now, if we want to start writing our Rust program, maybe we start... we can't type "main." Well, there's a way to add your own entry point, so we'll make an entry point and mark it as st— no, we can't write "start." We wouldn't even be able to write "feature" if we wanted to. There's some weirder stuff: you can name your symbols, right? So if we do link_n— no, we can't say "link_name." What if we just no_m— we can't "no_mangle." There's something you can do with a stat— okay, we can't "static" either. What we really want is some way to get a program running that doesn't need the A, and it turns out we've got a nice way to do that. Let's write — not main — "hello": we'll write a test. You can write a program like this, and if you run it with the test suite — we've got to say "no capture" here — it works fine: we get a hello world, plus a bunch of extra output from the test harness. We can do a little bit better with --quiet, but we've still got some nonsense. If we want to get rid of that, we can do some extra printing with a bunch of ANSI escape codes that instruct the cursor to move around and erase the harness's output for us. If we run this, we get just a plain hello world.

Maybe we're not satisfied, though. We're not satisfied — we don't like that we're cheating with A's on the command line. Let's go a little bit simpler. We've got our "hello" test, and since we don't want to type "nocapture," we reopen standard out directly, print all our stuff, and write it out — and we can't write "unwrap," of course. Then we exit so the rest of the test runner output never gets printed. Now we can type "rustc --test" — look, no A's — and run our test: ./hello, hello, --quiet. We get hello world. Thank you very much. Oh, thank you. Make sure to subscribe. Bye bye.
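Here's a reconstruction, in the spirit of the talk, of what the final a-free program might look like — the comments are for the reader and would of course count as cheating:

```rust
use std::io::Write;

#[test]
fn hello() {
    // Reopen the terminal directly, skipping the test runner's output capture,
    // so we never need to type the flag "--nocapture" (it holds the letter we fear).
    let mut out = std::fs::OpenOptions::new()
        .write(true)
        .open("/dev/stdout")
        .expect("no stdout"); // .expect() works; .unwrap() is off limits
    writeln!(out, "hello, world").expect("write error");
    // Exit now, before the test runner prints its own summary.
    std::process::exit(0);
}
```

Build and run, still without typing an 'a': rustc --test hello.rs && ./hello hello --quiet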
Lightning talks are 5 minutes long, on any topic, by anyone. Proposals are voted on by attendees and selected by conference organizers. Not all lightning talks have slides, and due to some technical limitations, not all slides were captured.
10.5446/52188 (DOI)
Okay. Thank you very much. Yeah. So I'm Niko Matsakis. I work on Rust. I'm on the core team, the compiler team, and the Lang team, so I'm like a little bit involved. And I want to talk to you about Polonius, and I want to address up front the most burning question, the most common question: where is that name even coming from? Some of you who may recall your high school education or some other education, you may remember Polonius as that dude who gets stabbed in Hamlet. He's behind a curtain. I don't remember why. Don't tell my high school English teacher. But he also is famous for this quote that he says to his son: neither a borrower nor a lender be, for loan oft loses both itself and friend. And I'm sorry to tell you, it's not a well-known fact, but Polonius was a C hacker. And he was passing on like professional advice to his son, just saying like, be really careful when you mess with references, because you could... uh-oh, what's wrong? All right, you're interrupting the flow of my joke. So the good news is he didn't actually die in Hamlet. He recuperated in the hospital, and in that time he read the excellent book, The Rust Programming Language, by Carol Nichols and Steve Klabnik, and he got really into it. And he has since adopted Rust, and he's given a new quote: you know, borrow, lend, whatever, the compiler's got your back, go out and build your dreams. And we're here today to talk about that. So yeah, so Polonius is named in honor of Polonius. And what it is, is it's a kind of reimagination of the Rust borrow checker. And as an end user, you won't notice, if we ever do adopt it for real (this is all still work in progress), you won't notice much difference, except that more programs work than used to. But from the inside, the way that Polonius is structured is very different. And that's what I'm here to talk about. So what I'm going to do is I'm going to go through kind of the classic borrow checker error, decompose it, explain how the current borrow checker thinks and analyzes and finds errors like that, and then show how Polonius does it, and then show you why the Polonius approach allows us to accept more programs and has potential for the future to open even more doors. So this, in my mind, is the classic borrow checker error. What you have here is you have a local variable named x. It's not quite as readable as I would like, but hopefully you can read it. You have a local variable named x, and then it gets borrowed here. So y equals ampersand x. And that borrow creates a reference to x, which is a shared reference, which means x is immutable while that reference is in use. And on the next line, we try to mutate it. And indeed, on the line after that, we go and use the reference. So we get an error saying you can't mutate this content because it's currently shared. And the compiler actually does a pretty decent job of saying, like, you know, here's the borrow, here's the mutation, and here's the use that comes later. And you can sort of see that they're sandwiched in between. Now if we try to take that error and make it like a little more formal, we might phrase it sort of like this. You get an error at some program statement n. If that statement n accesses a path p... okay, so statement n, that's the x plus equals one. That's where the error is detected. But what is this thing, path? What is a path? So what I mean by path: a path is basically some expression that leads to a memory location, right?
So a local variable x, that's a path because on your stack, there's a slot that stores the value of x and that's the memory location. Paths can be composed. So x dot f is some field of that local variable, right? So it takes the whole variable and narrows down to just the field in question. Star x dot f then follows a pointer off into memory. So now the memory we're referencing is often the heap, but it's still memory somewhere in the computer, right? And then you can also have indexing. Those are paths. Other kinds of expressions like the number 22, calling a function, those are not paths. Those produce values, but those values then have to get stored somewhere. So a path is kind of something you could assign to. So okay. So we've got now, now we know what a path is. So we can say, all right, we get an error if there's a statement n, like x plus equals one, modifying a path p, like x, and accessing the path p would violate the terms of some loan L. Okay, well, hold up. What's a loan? Well, a loan is the name I'm using for maybe obvious reasons for the result of borrowing something, right? So when you borrow x, the compiler in its mind has a loan. And that loan is saying, it's tracking the fact that x was borrowed. So it tracks what was borrowed, which is some path, and a mode, either shared or mutable, right? And when you have a loan, you can violate the terms of the loan by doing things to the path that are not allowed. So if you have a shared loan of some path p, then mutating the path p would violate the terms of the loan, right? Because the idea is, when I share something, everybody can share it, but nobody is supposed to change it, right? And so mutating would change it. Similarly, if you have a mutable loan, just any access to the path p is a violation, because the idea of a mutable loan is saying, I have created a reference, and now that reference is the only way to access this memory. So if you go and use the original path that led to the memory, you would violate the terms, okay? And you'll notice I wrote directly or indirectly. All that means, that means a few things, but one of the things that means is, like in our example, we have a kind of direct violation. We lent out the path x, and then we mutated the path x, exactly the same. Indirectly, well, you might also have a structure, not just a, like, just an individual value, but a structure, like here, some struct, and then we might lend out that whole structure, and then mutate some field of the structure. Now we're not mutating exactly what we lent out. We lent out x, and we're mutating x.field, but it's good enough. It's still a violation. Okay, so let's go back to our error. So we have here the statement n, accesses a path p. Accessing the path p violates the term of a loan, so we can apply that to our thing and saying x plus equals one, the loan here is the ampersand x, and we're, and mutating x violates the terms of that loan l. And we have one last condition now. That loan l has to be live. Okay, okay. What do I mean by live? So that's compiler jargon, but it's pretty simple. Something is live if it might get used later on. So really, you don't just say it's live. You actually say it's live at a certain point in the program, and at that point, if there might be some later use, then the thing is live. And usually it's used for variables, like local variables. Right, so if you think of this program, at this point, we might say is the variable x live? And actually the answer might surprise you. 
The answer is no, even though clearly there are uses of x later on. And the reason for that is that although there are uses of x, we're not using the value that's currently stored in x. Right, we store one here, and then we immediately overwrite that with two. Nobody ever reads the value one. So in compiler terms, you would say that variable is not live. Its current value will never be used. However, if we go to the next line, now x has the value two, and this is live, because if we don't go through the if, then we might read two here at this print. So x is live here. Interestingly, if we jump inside the if, like just before we assign four, then we can say it's dead again. It's not live here, because we're about to overwrite it. And it's also dead, or not live, I guess, either one, at the very end, because there's presumably no more uses of x that will come. So that's what a live variable is. So what do I mean by a live loan? Well, if you think of a loan, it creates some reference, right? Ampersand x creates and returns a reference that's going to get stored and passed around. And the loan is still live if that reference, or some other reference that was derived from it, might get used later. Okay, so by reference derived from it, this is what I mean by that. Here I have a little snippet, which I hope you can read. Okay, which I start out with one loan right here, where I'm saying y equals ampersand foo. I'm loaning out the variable foo. And then I create a new loan of y here, or sorry, I create a new loan z, which goes to y.bar. And the point is, we're not going to use the reference y anymore. So you might think, oh, I guess it's dead. And yes, the variable y is dead, because we're not using y directly. But the loan is not dead, because now this variable z is kind of based on it, and it came from it. And so when we use it here, we still consider the loan to be live. Okay, good. So now we actually have our complete template for what the classic borrow checker error is. There is an access to a path at a statement n. That's right here. We're accessing the path x. Then that path was borrowed in some loan. And the kind of access is not compatible, so it's mutating. It's a shared loan. And the loan is live, which it is, because it might get used later on. So if you look at this error, you can see two things. Well, one thing, actually. What you can see is that these first two statements are kind of very directly figure-out-able from the source line in question. You don't have to do anything complex. If I see x plus equals one, I can immediately see that that's writing to the path x. So that's a statement n writing the path x. And if I look at ampersand x, I can immediately see that it's borrowing the path x. It's a local property. But figuring out if a loan is live, that's a lot more complicated. That actually requires us to reason across the program and figure out, like, will there maybe be a future use? And that's, not surprisingly, where all the complexity of the borrow checker comes from. So how do we do it today? Well, what we do today is we compute this thing called a lifetime. And you've probably heard the phrase lifetime if you've worked a little bit in Rust. It came up in some of our earlier talks. Well, what is a lifetime? So in some sense, you might say, oh, it's this tick-a syntax, or tick whatever, that you've seen in Rust. But what the compiler thinks of a lifetime as internally is actually sort of different than that.
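To pin down the liveness walkthrough above, here is a sketch of the program being described (reconstructed from the talk, with the liveness of x noted line by line; `some_condition` is a made-up stand-in):

```rust
fn some_condition() -> bool {
    true // stand-in for whatever the real test would be
}

fn main() {
    let mut x;
    x = 1; // x is NOT live here: the 1 is overwritten before any read
    x = 2; // x IS live here: if we skip the `if`, the print below reads this 2
    if some_condition() {
        // just before the next line, x is dead again: about to be overwritten
        x = 4;
    }
    println!("{}", x); // prints 2 or 4
    // past this last use, x is dead for good
}
```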
It is the part of the program where that reference might be used. So I have like an ampersand u32. Part of that type, that ampersand, is associated with the part of the program where it might be used. And what do I mean by part of the program? I can make it really concrete for you. This is our program. We can add some line numbers to it. The part of the program where it might be used is just like a set of line numbers. That's actually how the compiler thinks about it. It doesn't use line numbers; it makes a control-flow graph and uses a set of nodes. But it's the same basic concept. So we could say, for example, the variable y might be used on lines two and three. And that's the result we're going to get, but the way we compute that result is through a process called lifetime inference. And how it works is we make little variables. Basically, for every reference and every lifetime that appears in any type, anywhere, we make a fresh variable. And this is a variable in the sense of your algebra class, where you have a constraint set of equations like x greater than or equal to one and y greater than or equal to x, and you solve for a value of those x's and y's. A variable in that sense, the algebraic sense. And the compiler's job is to figure out what set of lines tick zero and tick one represent. And I gave them these numbers instead of names to show you that it's not like real syntax that you can actually type. This is the compiler's internal reasoning. And the way that it does that is through two things. It kind of figures out relationships between these variables. And let me just explain what these two references are for a second. So the first one, the ampersand x expression, that is going to create a reference. So that sort of creates a value of type ampersand u32 that's going to get then stored later into y. And that value has a type, which is ampersand u32. But it's actually ampersand tick one u32. We're computing that, because every reference has to have a lifetime. So that type needs to have a lifetime, and we're calling that lifetime tick one. And then this tick zero, that's the type of the stack slot. The stack slot has a type, and that's the reference that appears in there. And so clearly, there's a relationship between these two lifetimes. So I'm going to take the reference that I make, and I'm going to store it into the stack slot. So the stack slot's type has to be related to the type of the values that get put into it. If you think of like Java or something, you store a string into a thing of type object. These are two different types, but there's a relationship between them. If that made any sense. That's a subtyping relationship. Anyway, leave that aside. Let's look at the actual values. So what is tick zero? So we want to compute: what are the lines where this reference y might get used later? And we basically do this based on a liveness rule. So we look at the variable y and we say, where is the variable y live? In the same sense that I talked to you about it earlier. Where might it get used later? For every line where y is live, all the lifetimes in y have to include that line. So in this case, y is live on lines two and three. It gets assigned on line one, so whatever comes before that doesn't matter, because that's an old value that's overwritten. And then we assume this is the end of the program, so there's no further uses. So it's live on lines two and three.
And so tick zero is going to be like a set of two and three, two different lines. Meanwhile, if we look at tick one, so this is that reference that got created by ampersand x, and it's getting stored into y. And if you think about it, that reference is never directly used, so to speak. I mean, it's sort of used as part of the assignment, but it's like a value that can't be named until it's stored into a variable. So it's never live in that sense, but we're still constrained, because it has to outlive tick zero. In order for us to be able to store a reference into a slot, that reference has to live longer than that slot, right? Or otherwise you'd have like an alias, another copy of the reference with a shorter lifetime, and that would be weird. You'd be saying the base reference is going to get used... sorry, that would mean you could copy from here and use from there, use it for longer, and that would be bad. So we get this relationship, and we find out: okay, everywhere y is used, clearly this value is also going to get used, because it's getting stored into y. So we compute tick one as the set two and three. Okay, so when we're done, we get this result. We created a reference, it's going to live for two lines, two and three, and it's going to get stored into y, and here's its lifetime. And now we can jump back to our definition, and we say, okay, we have a lifetime of the reference, which is the part of the program where it might be used. And now we want to figure out... sorry, what is the lifetime of the loan? We said the original question we were trying to answer was not what is the lifetime of references, but what is the lifetime of the loan? Is it live at a particular point? The answer is we just look at the lifetime on the reference. So look at the lifetime on the reference that is created by the loan; that's the set of lines where the loan might be used. Done. So now we can say, okay, here's our program. The statement n accesses a path p, that's line two. So the path p is x and it's a mutation. Accessing the path p would violate the terms of the loan l, which is up here on line one. That's because it's the same path and it's using a disallowed sort of access. And finally, we know that this loan is live, because we just look at its lifetime right there, and we see that it includes line two. So we can report an error. Done. That's how the borrow checker works, kind of. I'm going to expand on it in one second. Now, what happens in Polonius? So in Polonius, it works differently. We don't have lifetimes. I don't mean that the Rust surface syntax changes; there are still tick-a's and stuff. But the thing that they represent inside the compiler is different. We instead call it an origin. And the idea is, instead of tracking where might this reference get used, like looking to the future, we track where did this reference come from. What loans might have created this reference? So we look to the past. And it's going to be a set, in other words, of loans. So that changes our inference. We still have to do an inference step, but it works kind of differently. It works in the opposite direction, so to speak. So if I create my two variables, tick zero and tick one, instead of inferring them to a set of lines, I'm going to infer them to a set of loans now. So let's start. Last time we started with tick zero, but because we're going in the other direction, we're going to start with tick one this time.
So what is the origin of the reference created by ampersand x? What loans might it have come from? That's really easy. It's just that loan, right? It's just the loan for this expression ampersand x, L1. It's a singleton set, because we just created the reference. That's where it came from. And so tick zero, well, there's only one assignment to the variable y, so it must have come from there. So we also can say this is clearly just the set L1. If there were many assignments to y, like maybe y is a mutable variable and it gets assigned from many places, then it would be the union of all those places, but this is a simple case. So this is our end result. We have two origins, and both of them are the singleton set L1. So we're saying all the references here clearly came from this ampersand x expression right there. We can trace them. Now, notice something. Liveness: I didn't talk about liveness at all. But when I did the inference before, I had to think about where the variable y was live, and that impacted the end result. I don't talk about that anymore. I only talk about this subtyping, this data flow relationship. When you create a reference, where does it get stored to? That's all that matters for computing the origin. So now we come back, and now we have this question. Okay, now we know the origin of every reference, but the thing we want to know is, is the loan live? How do we answer that? And this is where liveness comes in. So we're at some program point. We look at all the live variables at that program point and say, what are the origins in their types? In other words, what loans are in their types? So if we come back here, this is our example. We have the statement n, which is line two. It's accessing the path x. That's in violation of the terms of L1. And now we want to say, this loan L1, is it live or not? So to answer that, we look at this point, line two, and we say, what variables are live here? Well, the variable y is live. It's going to get used later. And the type of the variable y includes the loan L1. Therefore, L1 is live. So we get an error. So the liveness doesn't come in when we compute the types, but it comes in when we're checking to see if there's an error later on. Okay, I'm going to stop here because this is like pretty crucial. I know I'm not supposed to take questions, but this is a different sort. Does anybody have any questions on this? Yes? Yes, can I walk through it again? I will walk through it again. So we compute that there's an error because, first, we know that on line two, we're accessing the path x, and that violates the terms of L1. So if L1 is live, that's a problem, because we're mutating x, and that's a borrow of x. Right? That makes sense? So now the question is, is the loan live? And to answer that, we look forward from this point, from line two, and we say what variables might get used in the future. In this case, the answer is the variable y is one of those variables. And then we look at their types. Is that a reference that might have come from the loan L1? And if it is, then we have a problem, because that means L1 might get used. Right? And we can tell where it might have come from, because that's the whole thing we're trying to compute. So we basically ask, is the loan L1 in the type of any of these things? And here's the type of y. It does indeed reference L1, and so we get an error.
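As a side-by-side summary of the two analyses just walked through, here is my annotated reconstruction of the running example. It is the error case, so the snippet is intentionally rejected by the borrow checker:

```rust
fn main() {
    let mut x = 22;
    let y = &x;        // loan L1: a shared borrow of the path `x`
    x += 1;            // statement n: rejected, mutating while shared
    println!("{}", y); // the later use that keeps the loan live
}

// NLL view (lifetimes): `y: &'0 u32`, where '0 is inferred FORWARD to the
// set of program points where the reference might still be used (the
// mutation line and the print). The loan's lifetime covers `x += 1`,
// so the mutation conflicts with the shared loan.
//
// Polonius view (origins): `y: &'0 u32`, where '0 is inferred BACKWARD to
// the set of loans the reference might have come from, here just {L1}.
// At `x += 1`, the live variable `y` has L1 in its type, so L1 is live
// and the mutation is an error. Same verdict, opposite reasoning.
```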
So I guess that would be, the nutshell is exactly this: we compute where everything came from, and then we look at what we might use, and see if any of them came from this loan. And if so, that would be a problem. Yes? Yes. So he asked if I can compare it to what was happening before Polonius. And the answer is, before, we sort of did the same thing in a way, but we did it in a different way. Before, what we did was, when we were figuring out the references, we were figuring out their lifetimes, we looked forward. We weren't thinking about it in relation to a particular error. Right? That's the key difference. We'll see later. We were just computing, in general, where might this reference get used? What is the set of lines where this reference might get used in the future? And then later, when we have a potential error location, we compare it against that set. Whereas now, we're computing, in general, where did everything come from? And then, when we have a particular error location, we figure out what might get used here, and we check if any of them came from that spot. Yes? So the question is, does it narrow the space that gets searched when it's doing this? I don't know how to answer that question, because I don't fully understand it, but I think that it's going to get answered very shortly, which is, I think the question is, why does this matter? Like, why are you telling me this? And I'm going to show you an example where it makes a big difference. All right, yes? Okay, so if line three was print x, would that be an error? The answer is no, because the type of x is u32. It doesn't have any loans in it. It's just a value. So the variable y would be considered dead, and no live variable has L1 in it. Okay, any more questions? Okay, one more, and then I'm moving on. Okay, the question was, if I replaced y here with ampersand x, would I get an error? And again, the answer is no, because the variable y is still not live. The only live variable looking forward is x, and its type doesn't have a loan. And to go a little further, what you would then be doing is making a second loan. There are two borrow statements, so that's a second loan. And yes, L2 might be considered live, but L2 hasn't even started yet, actually. So nothing that's live came from L2, so to speak. Okay. Okay, so we get an error. And now, as I promised, why did I take you through all of this, two subtle versions of the same thing that sound kind of identical? The difference comes up in this example, which was one of the examples... I don't know how well known this fact is, but I'm going to claim it's little known. A little known fact is that when we did NLL, which is the non-lexical lifetimes that we introduced in Rust 2018, we originally thought we would handle three classic cases, and we only got two. That is to say, there were three things that are errors, and we wanted them to not be, and we only eliminated two of them. And the reason was that it turned out to be just really hard to do this third case, which is this one. It was computationally infeasible, and the analysis was also a lot more complicated. So we simplified it in order to, you know, make some progress and come back to it later. And what is so complicated about this example? It's sort of, in some sense, nothing. So what happens here is, you have a function that takes a mutable reference to a map. It's going to return something out of that map.
It's going to return a string that's in the map somewhere. And it begins by asking, does the map have the key 22 in it? And if so, it returns the value. And if not, it inserts a new value and then returns a reference to that new value that we just inserted. And this is a pretty common pattern. Of course, savvy Rust users will know they could use the entry API, and that would be cool. But part of the reason the entry API even exists was to work around the fact that you can't write this function, and it was super annoying, and we were like, oh, we could do entry. Turned out entry was also cool on its own merits. But it would be nice if you could write this. Sometimes you'd like to write this. And the particular thing that makes this special is that it's a function, and we're returning this outside of the function. If you did this all within one function, it would work just fine. It's the fact that it's going outside of the function. And why does that matter? Well, first of all, here's the error you get. So you can see that for some reason the compiler thinks you made a borrow here, and you loaned out the variable map, and it thinks that that loan is still live when you're calling insert. But it's wrong, right? Because if we think back to our definition of is-a-loan-live, it was: is there some reference that came from that loan that's still going to be in use? And the answer is no. But to make that a little clearer, I want to explain it in terms of a lightly desugared version of that function. So this is like slightly more explicit Rust syntax. I did two things. First, I gave a name to this elided lifetime, tick A. So that's the lifetime of the map that's coming in. And then I rewrote map.get to be the function call version. So in Rust, a method call like map.get can be rewritten as type colon colon get. And the first argument is actually ampersand star map. The star is kind of not that important here. The point is we're borrowing the map in order to call hash map get. And so it's exactly this borrow that the compiler thinks is still live here. But if you look, we're going to make a reference here, a shared reference. And the only value that's still derived from that shared reference is v. And v is obviously not live here, because v is not in scope here. However, the problem is: what is the lifetime of v, actually? I told you that a lifetime is a set of lines. But actually, in this case, those lines are relative. When I said lines, I really meant lines in the current function. And the lifetime of v cannot be contained within one function. It's going to be returned from this function and given to a caller. Right? So actually, the lifetime of v is this thing tick A. Because it's saying it lives as long as the caller wants it to. Right? And so if you think about it, you can imagine here's the function get-or-insert right here, and here is the caller. So the lifetime tick A, what does that represent, really? That might be like lines two and three in the caller. But, in short, it's something we can't know. And there could be many callers, of course. So from the way that lifetimes really work today, they're actually one of two things. They could be a set of lines. So any reference that's completely contained within the current function, we can do the set of lines that we did before. But a reference that crosses and gets returned out has to be kind of blown up to one of these named lifetimes, like tick A.
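Here is a reconstruction of the function under discussion (the well-known problem case; the exact key and inserted value are my guesses from the description). Today's borrow checker rejects it, while Polonius accepts it. The lightly desugared form mentioned above is shown alongside:

```rust
use std::collections::HashMap;

// Rejected by the current borrow checker, accepted by Polonius.
fn get_or_insert(map: &mut HashMap<u32, String>) -> &String {
    match map.get(&22) {
        Some(v) => v, // the borrow of the map escapes only on this arm...
        None => {
            // ...yet the insert below is still flagged as conflicting today
            map.insert(22, String::from("hi"));
            &map[&22]
        }
    }
}

// The same function, lightly desugared: the elided lifetime is named `'a`
// and the method call is written in its function-call form.
fn get_or_insert_desugared<'a>(map: &'a mut HashMap<u32, String>) -> &'a String {
    match HashMap::get(&*map, &22) {
        Some(v) => v,
        None => {
            map.insert(22, String::from("hi"));
            &map[&22]
        }
    }
}
```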
And the important part is that the named lifetime is some part of our caller, and it includes the entire call to us and then some more stuff. So it basically includes all the lines in the function. And so for that reason, the compiler thinks, okay, I have a loan here. It has to be tick A. It can't just be some set of lines. And tick A includes my entire function. So when I get down here, well, that's part of my current function. It must be illegal. So that's exactly what I was asked (I think it was you two, but I'm not sure) when we were saying what is the key difference. It's exactly this. If you just think about how long does this live, the answer is it lives until some part of our caller returns, which is for the rest of this function at least, right? But that's not exactly the right question. You want a slightly different question. And that's where Polonius comes in. So with Polonius, we make a loan here, let's call it L1. That loan is part of the type of v. And we see that v is returned, and there's some logic we have to do there. But when we get here, at this point, there are no live variables that have that loan in their type. There are no variables at all that have that loan in their type. And so there's no error. The map insert is legal and everything's fine. Basically, the way Polonius ends up working, we never have to ask how long the reference will be live; we're only concerned with where things came from at this moment. And when we return, we obviously have to check that this value of v came from something that has tick A, and we can do that. But it's not really relevant when v is not live. So that's why Polonius is better. And actually, for this example, we had a way: the original NLL had a more complex formulation that could handle this example. But it was, as I said, computationally infeasible and kind of hard to think about. It turned out there were some more complex examples that came up later on that just could not be expressed in this way. You have to have the Polonius viewpoint to handle them. Because... I forget why, to be honest. They're complicated. But there wasn't a way to do it. It was exactly this turn that made it possible. So that's cool. But actually, I think that this way of thinking of things as origins has other future uses that might be really great. And at this point, I should warn you that we're entering like wild speculation territory. Do not consider this to be language features that are in active progress. But one of the big limitations of Rust that comes up when you do more advanced patterns is that when you have a borrowed reference, it has to be tied to something on the stack, essentially. So what you would like to sometimes be able to do is to say, I have a struct called Message that owns a buffer, which is a vector of strings. And then I have a reference that is pointing into something owned by that buffer. And you can't do this today. You can't have tick buffer here. It would have to be a parameter and be talking about something outside of this struct. We kind of want a struct that can reference parts of itself. And that would have been really useful for like async await and so on. Well, that's kind of what we have with async await, but we just cheated, essentially, by building it into the language instead of building this fundamental mechanism in. So how could we do that? I don't know exactly how this will work. There's a lot of complexity to consider.
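A sketch of the hypothetical self-referential struct just described. This is not valid Rust today, and the `'buffer` notation is invented syntax purely for illustration:

```rust
// NOT valid Rust today: a struct whose reference field points into
// memory owned by another of its own fields.
struct Message {
    buffer: Vec<String>,
    // hypothetical syntax meaning "borrowed from my own `buffer` field"
    slice: &'buffer str,
}

// To check a constructor for such a struct, the compiler would need to
// verify that `slice` really was derived from `buffer`: a question about
// where the reference CAME FROM (an origin), not about how long it will
// be used (a lifetime).
```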
But one thing I do know is, if we had this feature, when I'm going to create a Message struct, I'm going to have to give it two things, a value for each of these fields. One of them is clearly going to have to be a buffer. And the other is going to have to be some reference that points into that buffer. And so the compiler, in order to make sure that everything's on the level, is going to have to check that that reference really came from that buffer. And if you want to answer that question, lifetimes are the wrong tool. If we have only lifetimes, all we know is where the reference might get used. That's not what we care about. We don't care about that. We care about where did it come from instead. And that's exactly what the origins are tracking. So it feels to me like this kind of opens a door to language features that were closed off with the old analysis. So I'm pretty excited about it. I want to tell you a little bit more about Polonius itself. The current status is we have a working group as part of the compiler team that's exploring Polonius. The truth is I barely have any time to follow up with them, and they're doing amazing, awesome things all on their own. And Gene... cue, Gene, you have a role here. So I want... In case you can't see it on the video, Gene is holding up a please-clap sign. A famous please-clap sign. So here are some of the people that I can remember from the working group. There's probably somebody I forgot, and I'm going to feel really bad about it later. So to you, person, I apologize in advance. But this is pretty exciting. We're making progress. You're welcome to join in. You can check out the website. These slides are online, by the way. And you can see where we have meetings and so on. What we're doing is trying to extend the rules to cover the full borrow checker. Right now we handle that core borrow check, but there are other kinds of errors to report, and we want to handle those. And we're trying to make these rules not only more expressive, but really clean. So right now, the full specification is written in a language called Datalog. I'm not going to go into that, except to say that the full Polonius rules are like 22 lines of code instead of like 6,000 lines of code, which is pretty cool, because they're written in this compact format. Of course, there's a bunch of support code behind that; just don't look behind the curtain. But it's nice. So that's all. Thanks a lot, everybody. Let's have lunch.
Rust 2018 brought with it “non-lexical lifetimes” (NLL), a big overhaul of how the borrow checker worked that allowed it to accept a lot more programs. And yet, we’re still not accepting the full range of programs that was envisioned by the original NLL RFC – this is both for performance reasons and because the formulation didn’t turn out to be as flexible as originally thought. Polonius is a “reframing” of Rust’s borrow checker. It works quite differently than the original system. Rather than tracking how long each reference is used (its “lifetime”), it tracks where each reference may have come from (its “origin”) – and it does so in a very fine-grained way. As a result, Polonius is able to accept all the programs that the NLL RFC envisioned, and a few more! We also believe it can be made more performant (though that has yet to be fully proven out). This talk will explain how Polonius works in a simple and example-driven style. It will help you to gain a deeper understanding of how Rust’s borrow checker works internally. --- Note from the conference: The slides for this talk were not captured and we've re-recorded them after the fact. Some of the visuals did not work perfectly, and for that we apologize.
10.5446/52190 (DOI)
So today I'd like to talk about the current Rust IDE story, more specifically outline some of the history and context behind it, and also talk about the tools themselves, which includes the design decisions behind them and their current capabilities. But first, a couple of words about me. Most of us spend most of the time on GitHub and on Twitter rather than in real life, so chances are you probably recognize me more by my handle and by my avatar than by my real name. I'm the maintainer of the Rust Language Server. I started my journey with Rust in 2017 as a Google Summer of Code student, where I worked on the RLS itself under the Mozilla organization. And I'm a member of the official DevTools team, and dev tools themselves are my primary area of focus in Rust. So the content will be split into four parts, beginning with the early days, which I consider to be from 2014 to 2016. And because I'll cover a number of dates, I thought it would be good to visualize them on a timeline such as this, to put things into a little bit of perspective here. And so it would be good to establish a good point of reference for all the dates we'll cover. I believe a good point of reference would be Rust 1.0, which is like the first stable release, which guarantees us that once we write our code, when we upgrade our Rust toolchain, the code won't break, which was often the case in the Rust 0.x releases. However, the first language smartness tool was actually created more than a year prior to that, with the initial commit dating back to March 2014. The tool is called Racer, which some of you may probably know. And so I did some good archaeology and pulled a snippet from one of the initial commits. And as you can see, the syntax remembers different days. So for example, we have uint, which is now usize, if I recall correctly. And we have the tilde-t syntax, which is now a Box of T, respectively. So how Racer did this was it used the internal rustc parser, the so-called libsyntax. A parser, roughly, is a program that transforms your source code into some form of a representation that can then be further analyzed and worked on. And on top of that, it actually bolted on a very simple name resolution system. And once you think about this, we can use the name resolution to power one of the staple IDE features, such as jump-to-definition. So in this example, we have very simple code. If we were to resolve a path a colon colon b, we need to scan the code and see what a refers to. So in this example, this would be a module a. And then once we resolve that part of the path, we move on to the next one. So that's b. And we find the b in the nested scope that's introduced by module a. And once we do that, once we fully resolve the path (and that's obviously oversimplifying, but for the sake of this example), once we have the b, we know that it points to struct b, and so we can jump to that definition in our editor. It also included a completion engine. So we have two types of completion, again simplifying. One is scope completion. So that's very similar to the name resolution, but instead of actually stopping at one segment and resolving that, we get a list of all the valid possible items that are introduced in that scope, right? But also there is dot completion, which is a lot more interesting. But to actually correctly do that, we need to have a fully functional type system, because a structure can implement a number of traits, each pulling in their own methods.
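An illustration of the name-resolution scheme just described (my example, with made-up identifiers):

```rust
mod a {
    pub struct B;
}

// Resolving the path `a::B`: first scan the current scope for `a`
// (finding the module), then look inside that module's scope for `B`
// (finding the struct). Jump-to-definition then simply navigates to
// wherever the final segment resolved.
fn main() {
    let _value = a::B;
}
```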
And so to get an accurate list of methods that are callable on a given structure, we need to know exactly which traits are implemented for that structure. So you can see how this can get a little bit more messy. So Racer did have a very heuristic approach to that, but nothing very complete in the rustc sense. And it was also designed as a CLI tool, so it wasn't like a daemon or language server, as we mostly have now. About a year later, on September 1, 2015, that's the birthday of the IntelliJ plugin for Rust. And there the developers were faced with a decision: whether to reuse what they had, so that's Racer, or to use the set of tools developed by JetBrains for their IDEs and do their own support for the language. And in the end, the decision was to reimplement their own kind of compiler front end using that set of tools. And a fun fact: it was actually written in Kotlin, JetBrains' JVM-based language, which wasn't even 1.0 at the time. How they actually built the compiler front end is they used the parser generator called GrammarKit, from JetBrains. And a quick refresher or introduction on what a parser generator is: it's a piece of logic that accepts a grammar specification, so that would be a set of rules that define your language. And on the output, you typically get two programs, called a lexer and a parser. What a lexer does, simplifying, is transform the stream of characters into a stream of kind of abstract tokens. So in this example, we have a let token, which is a keyword, then we have an identifier token, which is a, and so on and so forth. And what a parser does is accept that stream of tokens and output a syntax tree, some form of representation that tools and the compiler itself can then use to perform further analysis. And syntax trees can be concrete or abstract, but I won't go into that right now. So having that, they initially had a very limited set of features, but as time went on, they added more and more. So initially, they started with basic name resolution, so that's very simple, similar to what Racer did. As time went on, it obviously got improved and expanded upon. They also supported indexing. So for example, when you take a find-all-references feature for an IDE, what you need to do to actually answer that request is walk the entire crate and see, from the entire source code, what references a given definition. So it does that, it caches the result, and that's basically roughly what indexing is. They also introduced Cargo integration support. So now the IntelliJ plugin knows what a Cargo.toml is, how your project is structured, and that you can have a workspace of many crates. And what's interesting is that initially they had a very heuristic-based type system, but in 2017, they reimplemented it in Kotlin to be like the real deal, and maybe not on par with what rustc has, but something very, very similar, at least aspiring to be complete. But leaving that for now, let's move on to the official Rust IDE story, in a way. So this all started when the PR containing Rust RFC 1317, dubbed IDE support, was opened. So that was in October 2015. And it covered mainly two things. So one is what an IDE tool looks like: what it does, how it communicates with editors, and so on.
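A schematic of the lexer and parser split being described (my illustration; the token names are made up):

```rust
// What a lexer might produce for the input `let a = 1;`:
// characters in, abstract tokens out.
#[derive(Debug)]
enum Token {
    Let,           // the `let` keyword
    Ident(String), // an identifier, e.g. "a"
    Eq,            // `=`
    Int(i64),      // an integer literal, e.g. 1
    Semi,          // `;`
}

fn main() {
    let tokens = vec![
        Token::Let,
        Token::Ident("a".to_string()),
        Token::Eq,
        Token::Int(1),
        Token::Semi,
    ];
    // A parser would then consume this token stream and build a syntax
    // tree, which tools and the compiler analyze further.
    println!("{:?}", tokens);
}
```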
And the other one, the second point, is how we can actually extract the data from rustc itself that powers all the smartness behind the tool. And after long debates, a couple of months later, the RFC was merged. And so the bikeshed had been resolved, the name had been picked: the name Rust Language Server was picked. But actually, the discussion about the data was revolving around two axes: whether we lazily compute the data, and whether we can adapt rustc to have this incremental infrastructure. And there was no resolution. So what was agreed upon was that rustc should dump all the data, all the smartness data, and then we can improve the status quo incrementally from there. So coming next to the RLS, which spans roughly 2016 to 2018, and that's obviously not to say that development has stopped; it's just that the most active and defining development was going on around then. The prototype was initially created by Nick Cameron and Jonathan Turner at the end of August 2016. And what's interesting is that for a second, it actually pulled in Tokio and Hyper. So the protocol was based around HTTP rather than what was eventually used, Microsoft's LSP, the Language Server Protocol, which didn't exist at the time when the RFC was opened and debated. So a couple of days later, it was officially renamed, and officially announced at RustConf 2016. And it was designed to be the IDE tool to rule them all, basically. So whatever request you wanted to make of that tool, it had to be able to answer it. So in order to do that, it actually pulled in a couple of different tools, such as Racer for autocompletion, rustfmt for code formatting, Clippy to integrate with the external linter, and Cargo to orchestrate builds and to detect project layouts. To come back to the data and how we can extract that from rustc itself, an additional pass was implemented. It was enabled via the -Zsave-analysis flag. And what it was meant to do is save the analysis pass results to a separate file, in this case JSON, but unfortunately the name kind of stuck and we called the data itself save-analysis. And that's to show that the RLS is actually decoupled from rustc itself, so it does not rely on the compiler internals as much as Clippy does, for example. So whenever we break stuff in rustc or change the internals, the RLS thankfully does not break as often. And since the compilation is always local from the point of view of a given crate, and rustc does not know how to compile multiple crates at the same time in a single session, we need to lower these multiple JSON blobs into a single coherent view that only then can be queried by the RLS itself. So to sum it up, the mode of operation is as follows. Our user changes input, so for example, they modify a file. The RLS instructs rustc to regenerate the JSON blob, which only then is lowered down to the database, which only then can be queried by the RLS. And you can see it takes a bit of time to fully complete the cycle. So to quickly explain the data format that powers save-analysis and the RLS: it was expanded upon over time, which means the feature set of the RLS also expanded, but initially it had only definitions and references. So for instance, if rustc encounters like a struct definition or an enum definition or a function, it records all of that into the JSON.
And also when it walks the crate and it sees references, it uses its name resolution pass to match references to actual definitions, and it records that too. So we can use that to power jump-to-definition: we only need to query the reference at a given cursor location. Same thing with find-all-references, but there we need to traverse the inverse relation of the references: for a given point in a file, quickly find all the references that refer to a single definition. But as I said, as time went on, we expanded the data, and we included stuff like the trait hierarchy. So we recorded every impl block for a given definition. And with this, we could quickly answer queries such as which traits a given definition implements, or roughly what methods are introduced in impl blocks for a given definition. It also served some niche cases, like de-globbing. So when this feature was designed, glob imports were kind of frowned upon. So what we did is we recorded which exact definitions had been pulled in by rustc itself, and then if a user wants to replace that with actual imports, they can just do that on the spot. Interestingly enough, at one point it also had lexical-scope borrow data; that was when the borrow checker was based on the AST. And that was to visualize all the scopes in the editor, because the borrow checker was deemed to be a hurdle for newcomers. So we wanted to be able to visualize the data so they could more effectively reason about it. But in the end, we knew that we would want to move to the non-lexical lifetimes model, so we just abandoned that data altogether. However, this brings me to completion, and that's a good question: how can we model completion in that save-analysis model? And the answer is we can't, really, because we'd have to record every possible completion for every expression that is in the crate, which blows up the save-analysis format considerably. And we can see that if we have more data, then we need more time to emit it to a file. And when we have more data in the file itself, we need more time to lower that into a single coherent view, as I mentioned previously. So this unfortunately increases the latency that's perceived by the user. And it's also very hard to model, because imagine the user types something like let a equals some expression, dot. How the RLS works is it instructs the compiler to redo the compilation on that crate, but that does not parse, right? It's not valid Rust code. So rustc just goes, yeah, I don't know what to do with this, and it just skips it altogether. Which brings me to the RLS 1.0 release candidate, which was announced in August 2018. So if you remember those times, you can remember that it was not well received. People did agree that it was not ready for 1.0, with the main pain points being the high latency and the lacking completions at the time. We kind of sidestepped that issue by just saying, okay, the RLS version is the same as the Rust version. So we are kind of officially 1.0 and past it, but there is work to be done on this end. And so we work continuously to reduce latency and improve project support and in general improve the RLS as we go. This brings me to Rust Analyzer, which is a very interesting project that saw its birth in 2018. It all started with an RFC called libsyntax 2.0, which was opened at the end of 2017. And what this RFC proposed was to introduce a parser that would output not an AST, but a concrete syntax tree.
So that's literally how the Rust source code was parsed, including comments and any other tokens. And you can see how this can be useful for different tools, for example for rustfmt, or even an IDE that needs to answer requests like extract a given function out of a scope, including the comments and whatnot. In the meantime, the incremental infrastructure had matured, something which was not originally planned for or considered in the original RFC in 2015. The first stable release of incremental compilation landed with the stable Rust 1.24 release, which was in February 2018. And the design turned out to be very good, and so it inspired another crate called Salsa. So the initial commits date back to September 2018, and it basically models queries, which are functions from type K to type V, of two different kinds. One kind is an input, so we can think of this as a function that pulls a value in from the outside environment. And the other kind is a pure function, which can thus be memoized, or cached, because the result of a pure function depends only on its inputs. We can see in practice how this can work in the IDE setting, or the compiler: for example, if we type check a given definition or a body, we don't need to do it again if something else changed in the source code that does not affect the definition for which the work has already been done. And so with this combined, with the research work that went into libsyntax 2.0 and Salsa, we actually had a pretty functional tool. We had a tool that was able to parse Rust source code and output a syntax tree that was then analyzed using the Salsa database. One of the main goals for Rust Analyzer was to explore how an IDE-ready compiler front end might look, and it emphasized laziness heavily. With this lazy approach, as Salsa demonstrated, we don't need to redo most of our work, in contrast to the RLS, where when you modify something very slightly, you still need to recompile everything and lower everything into the single coherent database. One of the goals is to reuse what we can. So we do not aim to reimplement the type checker; instead we want to reuse Chalk, which is a sort of reimagined trait system for rustc that's implemented outside of the actual compiler. And another goal is to be completely separate from rustc, which is both good and bad. So Rust Analyzer fully compiles on stable, and this greatly improves the contributability factor. To give you an example, on my fairly recent laptop, building rustc from the ground up with debug settings takes an hour and 20 minutes, so you can see how that can scare off potential contributors. But on the other hand, we don't have parity with rustc, so we don't accept the same set of programs that rustc does, and we don't get the same diagnostics. So to quickly recap, the mode of operation, in comparison to the RLS, is that whenever a user changes inputs, Rust Analyzer only informs Salsa of that, and then whenever a user asks something of Rust Analyzer, only then does it ask Salsa to do it, thus doing the minimal amount of work necessary. So the progress is that Rust Analyzer mostly works, which is great. It actually handles the Rust compiler repository itself, which I consider to be a great milestone. And recently macro expansion has improved by quite a lot, so we support a considerably big subset of real-life Rust code, including the Rust standard library.
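The input-versus-derived-query split described above is easy to sketch by hand. This is a minimal hand-rolled illustration of the idea, not Salsa's actual API (Salsa generates this kind of machinery, plus proper dependency tracking, via macros):

```rust
use std::collections::HashMap;

struct Db {
    // "input" query: file contents, set from the outside environment
    files: HashMap<String, String>,
    // memoized "derived" query: a pure function of the inputs
    line_counts: HashMap<String, usize>,
}

impl Db {
    fn set_file(&mut self, path: &str, text: String) {
        self.files.insert(path.to_string(), text);
        // invalidate anything derived from this input
        self.line_counts.remove(path);
    }

    fn line_count(&mut self, path: &str) -> usize {
        if let Some(&n) = self.line_counts.get(path) {
            return n; // cached: the input hasn't changed, no recomputation
        }
        let n = self.files.get(path).map_or(0, |t| t.lines().count());
        self.line_counts.insert(path.to_string(), n);
        n
    }
}

fn main() {
    let mut db = Db { files: HashMap::new(), line_counts: HashMap::new() };
    db.set_file("lib.rs", "fn f() {}\n// two lines".to_string());
    assert_eq!(db.line_count("lib.rs"), 2); // computed once...
    assert_eq!(db.line_count("lib.rs"), 2); // ...then served from the cache
}
```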
But on the other hand, we still don't have diagnostics, and we sidestep this by running cargo check on the side. Also, stuff that needs indexing does not work yet, for example the find-all-references feature. Now, around the beginning of 2019, a working group dubbed RLS 2.0 was formed, somewhat organically, at the Rust All Hands, which was in February. And the goal was to be an experiment, to see how rustc itself can be adapted to be more IDE-ready in that sense, and they based the work on Rust Analyzer, on what we have already. One of the other goals was to librarify rustc, as in to find out how we can split and modularize rustc into more fine-grained and pure crates and interfaces, in contrast to what we have right now, which is just a bunch of crates that happen to work. And also, somewhat of a secondary nature, is how we can bridge those two tools, so as not to be so confusing for the end user when it comes to IDE support. So the first fruit of the work for the working group was the rustc_lexer crate, which was merged upstream in July 2019, this year. What actually happened is a PR was opened that pruned all of the lexer code that relied on the internals of rustc and replaced it with fully stable, compiling code, which actually means that we can now share the same lexer code across the compiler itself and Rust Analyzer, which is great. Which brings me to today: what do we do today? We obviously continue to improve the existing tools, separately and together. So separately the Rust Language Server and Rust Analyzer, but we do try to unify the IDE effort so as not to be so confusing for the end user. For instance, the RLS and Rust Analyzer had their own virtual file system implementations and their own LSP server implementations, so we plan to cut down on that and hopefully share more code rather than less. And rustc itself is seeing a lot of cleanups internally, so who knows? Maybe something like name resolution will be extracted into a separate crate in the near future. This brings me back to my opening question: are we IDE yet? I'd say not yet, but we're very, very close, so you'll have to just bear with us for a little bit longer. Thank you. Thank you.
Ever since appearing on the Rust 2017 roadmap, IDE support has been and continues to be a highly-requested feature that should boost productivity when working with Rust code. Despite the landscape shifting a lot during these last 3 years, including a proliferation of new tools and improved integration between tools, it feels like the Rust IDE story is not yet complete. This talk will explore the current status of the official Rust Language Server (RLS) and Rust Analyzer, which is a main focus of the official RLS 2.0 compiler working group.
10.5446/50977 (DOI)
Check, check, check. Okay. Hope everyone can hear me. Welcome to the surprising science behind Agile leadership. My name is Jonathan Rasmusson. I'm extremely excited to be here with you guys today to talk about Agile and leadership, and why we do some of the things we do when we're working together on certain types of projects on teams. Like, why are we driven to do certain things? And I'd like just to start with a story. This is kind of an old business tale. There was once a gentleman by the name of Charles M. Schwab who ran and owned the second largest U.S. steel company. This is in the early 1900s. And he and his managers were having a hard time increasing production in their steel mills. They just couldn't get more out of their mills. They were being outproduced by Andrew Carnegie and some other guys. So finally Charles himself kind of got fed up. He came down to one of the steel plants and he asked the manager what the quota was for the day, or how many heats of steel had been produced at the mill that day. And the manager said six. So Charles Schwab took a big piece of chalk, walked up to the entrance of the factory floor, drew a big number six in the middle of the floor, and then just walked away. So that shift ended. The nighttime shift came on to start their work. They saw this big six in the middle of the floor and they said, hey, what's up with this six? What does this mean? And they go, well, that's the number of heats of steel the day shift produced. And the big guy was here: Mr. Schwab himself wrote this down. So the nighttime guys were like, okay. Off they went to work. They worked a little harder, and at the end of their shift, very proudly, they came out and they rubbed out the six and put a seven. They outproduced the daytime shift, so they were happy. The following morning, the daytime shift guys come back. They see their six crossed out and a seven replacing it, and they go, what happened? Well, the nighttime guys produced a little bit more than you guys. And they're like, okay, now it's on. So now you can imagine what happens: the daytime guys, they do their shift, they work extra hard, they come out, and very proudly they erase their seven and put a ten. An astounding ten heats of steel that had never been done before. And that became the new quota for each shift. Now that's a simple business maxim, a story we could all share, and we'd probably hear in some traditional MBA classes about healthy competition and things like that. But it's interesting to reflect on that. Would that work today in the type of work we typically do, writing and building compelling pieces of software and services? In other words, in Agile we have this concept called velocity, where we try to measure the productivity of how much a team can produce every iteration, by how many stories. Something we don't typically recommend on Agile teams, though, is comparing velocity across teams, because it's not a true apples-to-apples comparison. We just know that. We're not producing heats of steel; we're writing software. Also, how do you measure something like creative work? I don't know. It's another difference between the industrialized, process-type work that Charles M. Schwab was doing at his factory and the type of work we do daily, where we quite often don't know what the solution is going to look like. We're still discovering the problem. We're still trying to find something to measure. And why are our estimates always so wrong?
As an industry we've done a terrible job of managing expectations around what we can deliver in terms of time and budget. But there are reasons for that, and we're going to look at them. So I've been in the Agile space for some time, and I've always been very comfortable explaining what Agile leadership is. You know, it's terms like this: a servant leader, self-organizing. Let the team organize themselves. Let them be empowered. They'll figure out how to do the work. Those are the sort of maxims you'll always hear associated with an Agile leadership model. Accountability. We want to induce behavior, not compel it. But something I haven't been that great about describing over the years is why. Like, why do Agile teams want to work this way? I could get into a debate with a traditional manager, and we might have a difference of opinion about this Agile way of working, but I was never great at explaining the why. I could explain the what, and I kind of knew how the what worked, but I could never explain the why. And it wasn't until I came across this book by Dan Pink, The Surprising Truth About What Motivates Us, that I finally came across a little bit of a Rosetta stone, or an inkling, about the why. There are very good scientific reasons for why we do some of the things we do. So I was able to get out of this opinion-based discussion. And after reading the book, a lot became clearer to me around the why of Agile leadership. So if you can bear with me for 10 short minutes, I'm going to save you reading the book; this video explains it very well. We're going to watch the video and then use it as the basis of our conversation for the next few minutes. And this is just what we're going to cover: the science behind Agile leadership, how the nature of work has changed, and then three takeaways to help you lead and manage your teams when you go back. But let's just quickly watch this video here.
Not everyone is a fan of the self-directed, self-organizing team. It flies in the face of traditional project management, and often conflicts with the traditional organization model. The benefits of self-directed teams, however, are too big to ignore, and now we have scientific proof as to why. In this new talk on agile leadership, I explain: how and why agile leadership works, the science behind why so many choose to work this way, and the impact it's going to have on you and your organization.
10.5446/50992 (DOI)
All right, everyone, we'll get going. So my name's Tatham Oddie. I work for a consulting company down in Australia, and my focus is on web applications. What this talk is about is a number of the lessons I've learned over the years of working on large public websites, and how, as soon as you put something on the internet, it doesn't matter how much you've tested it, it will break straight away. So you've made it into production, and it's kind of a question of: now what do we do? So it's a bit of a DevOps-focused talk, if you're familiar with that term, except very much talking about application code. I'm not talking about deployments or PowerShell scripting or any of that sort of stuff. The techniques I'm going to show you fall into three main areas I'll walk through. They're quite simple in how they actually work, just very simple gets and posts, things like that, but I find them very powerful. They're also not specific to any particular web framework. So I'm going to be showing you some stuff in ASP.NET; you can go and apply this in Ruby on Rails, Node.js, whatever you want to use. And they're equally applicable to sites of all sizes, small ones, large ones, and both internal and public websites as well. So I'm going to kick off with an application that I'm running here, which is mysite.localtest.me. This is a bit of an auction website that I've been building; here's the story to it. So we can see the time's ticking over. I've got a login button and an about button and that sort of stuff. Now, the time that's ticking over here, we're showing that clock because it's an auction site and we want people to know what our official time is. And that's obviously driven by JavaScript. So I always like to approach JavaScript by building everything, first of all, in a way that works without JavaScript, and then adding JavaScript as your progressive enhancement on top, because even if users do have JavaScript, they don't have it while your JavaScript is still loading. And I also find it makes things easier when you're building it. Now, in this case here, if we were running in production, if I go down and we look at the bottom, I'm combining all of my JavaScript into a single file. All works well enough. The problem is, if I have a problem that's happening only in production, or in an environment where I'm running combined-script mode, what do we do? We usually end up going and changing some config flag or something like that, and we have to then let that environment spin back up again. That's not that nice. So one of the ideas that we developed on one project was being able to pass up just a query string flag where we can say the JavaScript mode is off, simple as that. And I have no JavaScript. Or I can say the mode is dev, for development, where I do get JavaScript; but then if I look at my page source, you'll see I'm getting the three different files exploded out separately. Incredibly simple. Means it can work in any environment. What this also opened up for us was the ability to have functional tests that targeted our non-JavaScript scenarios as well, which was quite easy to do, rather than trying to automate disabling and enabling JavaScript in the browser.
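As a rough illustration of that query-string switch, here is a minimal sketch in Rust (rather than the ASP.NET the talk actually uses); the flag name and the three modes are invented for illustration:

```rust
/// The three script-serving modes described above (names invented here).
#[derive(Debug, PartialEq)]
enum JsMode {
    Off,      // no JavaScript emitted at all
    Dev,      // each script file served separately for debugging
    Combined, // the normal production bundle
}

/// Pull an override out of the request's query string, falling back to the
/// combined production bundle when no recognised flag is present.
fn js_mode_from_query(query: &str) -> JsMode {
    match query
        .trim_start_matches('?')
        .split('&')
        .find_map(|kv| kv.strip_prefix("jsmode="))
    {
        Some("off") => JsMode::Off,
        Some("dev") => JsMode::Dev,
        _ => JsMode::Combined,
    }
}

fn main() {
    assert_eq!(js_mode_from_query("?jsmode=dev&x=1"), JsMode::Dev);
    assert_eq!(js_mode_from_query("?x=1"), JsMode::Combined);
}
```

The point of the design is that an unrecognised or absent flag always degrades to the safe production behaviour, which is why it is harmless to leave enabled on a public site.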
In terms of the impact, one of the lessons that I personally got out of this was starting to think about these things in that you can have this flag on a big public website and there's no impact. Even if a user finds it and knows it's there, all they're doing is getting the same JavaScript file split up. So it's actually quite safe to do. So that's simple enough. But as soon as I go to another page, I lose that query string key, obviously. So this doesn't work for everything. So the next approach that we had, if I switch over into Firefox for this, is I can actually jump down here into my cookies and create a cookie where I'll say something like config.jsmode, set the host to .mysite.localtest.me, make it a session cookie, and then I can just say the js mode will be off. Actually, I'll make it dev so we can see it. Then, when I go out of that, every time I load any page on the site, I'm getting my JavaScript split out. So I can walk across multiple parts of the site. Now, JavaScript is only really one part of the story. You could obviously do all of this by just enabling and disabling JavaScript. But what we're starting to do here is generally just keep all of our config settings in a way that can be overridden. On the specific project this is about, we moved all of our config settings, as much as we could, out of the web.config and into a database. We just had a database table with the config key, the value, and a TTL for how long it could be cached across all the different servers. The advantage this gave us was that we could turn things on and off across different servers very easily. We also had a fourth column where we could target config to a particular server. So then what we did was whitelist config settings, to be able to say: okay, these ones are allowed to be overridden from the client. So the next scenario this creates is: well, what happens if we've been working on a whole new section of the website and marketing wants it to launch at a particular time, like exactly 10 a.m. on Monday morning, right in the middle of peak traffic, because that's when an ad campaign is kicking off? We wrapped all of this type of stuff in feature flags. So if I have something like config.secretproductlaunch, and I set that to true, then when I load my page, I've got a new link showing at the top, and the page becomes available. So rather than actually deploying features on the date and time, what we'd do is just turn the config flag on. Now, from a server consistency point of view, we wouldn't have actually used a Boolean like I just did here. What we actually used was always dates and times, so that the servers could re-cache their config information every couple of minutes, and the server times are all consistent. So what we'd have in the config setting is: this page or this feature becomes available at 10 a.m., so that they'd all turn on simultaneously. So we turned the links and everything on. From an operational perspective, it also meant that we could turn features off again later. So there were a couple of times where we ended up on national TV, where there were segments about the site, and everybody would jump on, and if you watched the firewall, we'd literally see 50,000 extra sessions get added in less than a five-minute period.
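A minimal sketch of that database-backed config table, with per-key TTLs and a whitelist of client-overridable keys, might look like this (in Rust rather than the original .NET, with all names invented for illustration):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// One config row as described: its value plus how long servers may cache it.
struct CachedSetting {
    value: String,
    fetched_at: Instant,
    ttl: Duration,
}

struct Config {
    cache: HashMap<String, CachedSetting>,
    /// Only these keys may be overridden from the client side.
    overridable: &'static [&'static str],
}

impl Config {
    /// Resolution order: a whitelisted cookie override wins, then the cached
    /// database value. A real implementation would re-query the database
    /// here instead of returning None once the TTL lapses.
    fn get(&self, key: &str, cookies: &HashMap<String, String>) -> Option<String> {
        if self.overridable.contains(&key) {
            if let Some(v) = cookies.get(key) {
                return Some(v.clone());
            }
        }
        let entry = self.cache.get(key)?;
        if entry.fetched_at.elapsed() < entry.ttl {
            Some(entry.value.clone())
        } else {
            None // stale: time to hit the database again
        }
    }
}
```

Storing activation date-times rather than booleans, as the talk describes, then just means the value is compared against server time at read, so every server flips at the same moment regardless of when its cache refreshed.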
So during that time, to control load on the site, we'd actually turn features off, like stopping you from being able to go to the second page of search results. You'd just see one page, and then we'd turn off the ability for you to change the number of items per page, so you just got 100 results no matter what, to increase the effectiveness of our caches. And we could do all of that through these database-based config settings that also allow us to have client-side overrides. So let's say we've then deployed this out to production, and the new feature's there. Now marketing want to be able to get in and check that the feature is good to go and that all the content's ready, because even if it's functionally right, they want to see that the data in the production database is showing up properly before they launch it. Now, they're not going to go in and create cookies, and you're going to have security impacts around that as well. So the other thing we built was this tool which I called Configurator. So we were running on mysite.localtest.me; this tool runs on configurator.mysite.localtest.me. So, under the same parent domain. And in this tool here, you can see it's showing I've got a js mode of dev, which is what it's overridden to. I could turn that off, and I can turn my secret product on. I go and reload the page, and those settings have taken effect. Now, this isn't changing anything on the server. Because it's in the same root domain, all it's doing is creating those cookies locally and putting them in. So then we could open this tool up to internal users to be able to try out different configurations on the site. And then, one of the other interesting scenarios we get into: you'll see one of the keys here, UtcNow. We also relied on this very heavily for our functional tests, where we'd have things like: we'd open an auction, we'd put a series of bids on it, and then we'd have to close the auction. Except our auctions, if people keep bidding, like a real auction, they keep going. So they only close once nobody's bid for 10 minutes. Now, you don't want a functional test that just delays for 10 minutes. So what we could actually do is go in here, and I can even make this tomorrow. As soon as I blur out of that field, when I go and reload the page, you'll see that the server time is now tomorrow, because I can override even that via a cookie as well. So our functional tests were able to turn different features on and off for what they wanted to test, and move the time backwards and forwards, which was fairly powerful. Obviously some security questions around this, then. Having users turn different pages on and off is a little bit scary, and having them change the clock on a bidding website really isn't going to work. So the other advantage of having this tool is that every time our application got deployed, one of the things our database upgrade scripts would do is generate a new random key and store it in the database. Now, whenever you send a cookie override in production, where we have a cookie that's something like config.jsmode=off, there then also needs to be another one, config.jsmode.signature=, followed by a digitally signed value based off the key that's in the database and whatever value you're sending up for the override. And that key would roll over every time we deploy.
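A sketch of that override-signing scheme, assuming the `hmac` and `sha2` crates (the original was .NET; the function name and the key-equals-value layout here are illustrative):

```rust
use hmac::{Hmac, Mac};
use sha2::Sha256;

type HmacSha256 = Hmac<Sha256>;

/// Produce the signature cookie value for one override, keyed by the
/// per-deployment secret stored in the database.
fn sign_override(secret: &[u8], key: &str, value: &str) -> String {
    let mut mac = HmacSha256::new_from_slice(secret)
        .expect("HMAC accepts keys of any length");
    mac.update(key.as_bytes());
    mac.update(b"=");
    mac.update(value.as_bytes());
    // Hex-encode the tag so it can travel in a cookie.
    mac.finalize()
        .into_bytes()
        .iter()
        .map(|b| format!("{:02x}", b))
        .collect()
}
```

The server recomputes the same tag from the stored key and simply ignores any override whose signature cookie doesn't match; rolling the key on each deployment invalidates every signature issued before it.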
So it's obviously not something you can compute in your head, whereas this tool could compute all of those signatures and then send them up. So whoever we gave access to this tool, which had access to the underlying key, could do it. That way we could even have the CIO sitting at home, and he could just open it up and turn features on and off because he had access to the tool, which was quite powerful. Any questions around any of that? So yeah, incredibly simple, but it opened up a whole bunch of scenarios for us. Before I go on from this little sidetrack: the domain name I'm using here, you'll see, is localtest.me. Now, in coding this up as a solution, I needed to actually have host names, because of the way the cookie security works, obviously. What you'd traditionally do is open up your hosts file, make a name up, and point it at 127.0.0.1. A bunch of us got sick of doing that. So this is actually a real domain name on the internet: localtest.me, and anything .localtest.me just points to your local machine. So you can just make up host names. The only request that goes off your machine is the DNS hit, and then we point all the traffic straight back to your machine. And the other advantage of it: the only name it doesn't redirect is readme.localtest.me, which has all the instructions on what it does. But we also give you a completely legitimate SSL certificate. So if you're doing client-side SSL development at all and you get sick of those self-signed certificates, we pay for and ship an SSL cert that you can just use yourself against those host names. It's a wildcard cert. All right. So, we've got our website into production and we're now starting to get some errors back. The first thing that we did was build a really simple endpoint, which is just /debug/throwexception. On the number of projects I get to, whenever they want to work on their custom exception page, they go in and break some line of code to make it throw an exception. Just add an endpoint whose only job is to throw an exception. That means you can actually test it in every environment. But then what we did, you'll see here, because I'm running locally, is list out the full exception text. On this particular project, and it's something I've done since, we also write out this kind of error number here. This was a company who had a customer service line, and people would phone up with issues and things like that. Now, the way this error number is structured, you'll see every time I refresh this page, the last part of the number is changing. The first half of the number is what we referred to as the error ID, and the second half is the instance ID. So what we do when there's an error (re-initializing ReSharper here, and this is all lovely demo-quality code, so please don't judge me too much) is we get the exception text, which is basically anything that's going to be consistent every time this error occurs, and we calculate an event-log-safe integer hash. I'll explain what that is in a second. And that gives us the error number. We basically just take that text and hash it.
And in the instance-specific text, we add all sorts of information about what IP address they came from, what headers they sent, their form data, and all that sort of stuff. Then we calculate another hash off that, and we just concatenate the two numbers together. Now, the advantage of this, from an operational perspective, is that when the error reports come out, we can group our logs by the error ID at the start and just say: okay, tell me which error happens most often. And we could focus our efforts there, rather than having to trawl through all of the different instances. It also meant, from a customer support perspective, that when we had something like this come in, we could work out what the workaround was, let the customer support people know, or the help desk or whoever you've got, and just say: whenever you get a 54172 come in, that's a known issue, the expected resolution is about a week from now, and here's the workaround. So rather than just having that really generic page that says something broke, we could actually have a story to go back to our users with, and give them some information about what they could or couldn't do at the time. And if it was a case where we really didn't know what was happening, or it was specific to one particular user, they could give customer support that whole number, and then, using the last part of it, which is effectively unique to every single instance of an exception being thrown across our site, we could look up that particular correlation ID. Now, from a reporting perspective, you'll see there are two different ways that I calculate the hashes in here. For the error number, we calculate this event-log-safe integer hash. I'm a firm believer that it doesn't matter what your application is or what logging you have in place, you should also write to the event log, because the event log is where admins expect to find stuff on the system. So every time we have one of these errors come out, we write it into the event log. Now, in this particular case we were using SCOM, System Center Operations Manager, but there are heaps of tools that will surface information from your event log. So using that error ID, we can write it into the event log as structured data, and then, with all of these other tools, you get all of that reporting and analysis for free, basically. The problem is there's a very limited range of IDs you can use in event codes without adding custom fields. So basically we take the error text, get an MD5 hash, jam it down from 128 bits to 32, and then wrap it into a range of 10,000 that we can use. So there's a risk that our error IDs can overlap, but it's fairly rare and hasn't actually impacted us at all. That will obviously depend on how you want to log things in your particular scenario, though. The other thing we did, for the instance ID: basically we build up this massive string of text that has the date, time, the request headers, the form, all that sort of stuff, and then we just call string.GetHashCode.
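A hedged sketch of the two hashes just described, assuming the `md5` crate; the exact byte-folding the project used isn't specified in the talk, so this is one plausible version (the always-positive wrap is the usability point picked up in the next paragraph):

```rust
/// Fold the exception text's MD5 digest down to 32 bits, then wrap it into
/// a 10,000-wide range so it fits the limited space of event-log event codes.
fn event_log_safe_hash(error_text: &str) -> u32 {
    let digest = md5::compute(error_text.as_bytes()); // 128-bit digest
    let folded = digest.0.chunks(4).fold(0u32, |acc, c| {
        acc ^ u32::from_le_bytes([c[0], c[1], c[2], c[3]])
    });
    folded % 10_000
}

/// Make a 32-bit instance hash always read as a positive number, since
/// users on the phone tend to skip over a leading minus sign.
fn always_positive(instance_hash: i32) -> u64 {
    // Reinterpret the signed bits as unsigned, then widen.
    instance_hash as u32 as u64
}
```

Because the error ID is derived only from the stable exception text, every occurrence of the same bug lands in the same bucket, which is what makes the group-by-most-frequent triage possible.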
Except, from just a raw usability perspective, one of the problems with GetHashCode is that it returns a negative number a lot of the time, and users aren't very good at reading out negative numbers; they'll just ignore the dash because they think it's some formatting thing, which would make it hard for us to find stuff. So what we do in that case is take the hash code, which is a 32-bit integer, and wrap it around into a 64-bit one to make it always positive. So, just a little approach there. And then the information that we get coming out allows us to search for all these different instances. So this is the type of information that will end up in the event log. There, 50085 is a favicon-not-found error. That's another one. So, fairly simple logging. This is stuff which is really a development responsibility to get out there, and it makes your own life a lot easier. And then the last part, still talking about event logs, because they're the most interesting thing in the world: ASP.NET actually has a really good infrastructure under it which a lot of people aren't aware of, in the System.Web.Management namespace, where it has these audit events that you can use. It actually has a predefined set of web event codes for a whole bunch of different scenarios. And then, any time you're raising your own events, it also has a base code which you add on to for your different types of events. So doing that, you get a whole bunch of native support out of SCOM and things like that for monitoring applications. All right. So, talking about exposing these little endpoints that were useful: one of the other things we found useful was just dropping a text file in the root of the application, which has the specific build that generated whatever is running in that environment. It made environment traceability a whole lot easier, it meant we didn't have to write it out to the footer or something like that, which was ugly and users could see it, and it also meant that our application doesn't even have to be running to serve the file, because it's just a dumb static file. So, a really easy form of build stamping. In this case, you can see they've obviously pushed a hotfix about eight days ago and that's how it got out there. And then there's another file we stamped as well, version.txt, which has the changeset that the build came from. Now, in this case it's coming out of TFS, so it's just numeric changesets. But there are two different concepts: one, what was the code that we used to get here, and two, what was the build process we went through, so we can always go and download those exact artifacts. It just made environment traceability a whole lot easier. I'm just going to rearrange my tabs for a second. Cool. All right. So the next thing I wanted to talk about was a scenario that I actually encountered just last week, which I found rather painful, and how we actually debugged it. What happened was: we launched a website publicly and it had a fairly small number of users, only about 3,000 to 4,000, but also incredibly high-value users, because it's an investment management site and they're all worth a lot of money to the organization. And there was a very small subset of users who couldn't log in at all.
They'd go to log in, they'd type in their credentials, and then they wouldn't get an error message that said your credentials are wrong or anything like that. They'd just end up back on the login page. And we couldn't repro this at all. We spoke to a number of them, or they were phoning up customer service, and then we ended up getting through to someone directly. And they were all using a consistent browser, which was good old wonderful IE8. Except, even so, we couldn't repro it in IE8, and there were lots and lots of users in the logs who could log in just fine. So we started to really wonder what to do next. And the scenario here: if I go and log in, if I put in bad credentials, I get something that tells me my credentials are wrong. If I type in good credentials, I just end up back on the login page. I didn't go anywhere. So how do we debug this? Is this the type of scenario people here have run into? You can't reproduce something like this. Do you have a great solution for it? Cool. So what we ended up doing: we wanted to be able to say, okay, what headers are going backwards and forwards over the wire? We wanted to see Fiddler-level detail. But the one guy we were talking to, who we'd established direct contact with past customer support, is this 80-something-year-old guy who lives a state away, and we really couldn't ask him to even run up FiddlerCore. Getting him through the process of "are you on the login page? Yes. Okay." was excruciating as it was. So we needed a better solution than that. So what we ended up building was basically a server-side version of Fiddler. We also didn't want to record every request flowing through, because we'd generate way too much data. So what we ended up with was this endpoint that we put up on the public website, /debug/starttrace. Because we didn't know where this guy's computer was, we didn't want to gather everyone's logs. And we made it as basic as it possibly can be. There's no cookies, there's no nothing. (And I don't have ReSharper running on this machine, so bear with me.) So what start trace does is: there's literally a GET, and then when you hit the start button it posts back. We have this trace provider. So when I do that, it says: okay, your session ID is this big long number. Now we can get him to read that back to us over the phone. What we've done on the server is push into server state and say: hey, anybody that comes from this IP address, because we've just captured his IP, log everything and record it against this session ID. So this is the experience he'd have. On the previous page, we had a nice little message that says: hey, you've reached this page, you're going to help us diagnose an issue; in doing this, our operations people will see everything you type in. Are you okay with this? Yes, I agree. Next. So we're in, and then he could type test, test, click log in. We've got the problem, and he goes: okay, yep, I've just had the problem again. What we were able to do now, server side, is look at a file that had been written out, which we were producing using log4net. If I open this up, here I've got: started trace from that host address, and then basically it puts the session ID behind all of the entries. We can even see here the POST request where they started the trace. So I can see, okay, he posted to /debug/starttrace.
We sent back a 200 response, and those were our headers. Then he got the home about page: got a 401, got redirected to the login page. So he did a GET there, got a 200, and then here we go. So we've got the raw URL, it's the login page, we've done a POST. We can see the request headers coming up, so we can see that he's in IE8. We can see the form body here. So we actually had enough information that we could replay this entirely locally. But even so, we went and replayed it in Fiddler and nothing broke. So we kept going down. We've got the response headers here. We're issuing an authentication token to him. And that's got a, where are we, 16:43 on it, and there's the 20-minute expiry period on the token that we're using. Then we get to the next bit, where he gets the about page, and we get a 401. But with this level of information, what we were able to see is: if you look at the home about GET request there, there's no cookie. We sent him a cookie, and the cookie never came back. And we had other cookies that were reliably coming back every time, like the Google Analytics cookies. Anybody picked the problem yet? Right. So here's what IE8 doesn't do. What most browsers do now is: the server sends back a date header along with the set-cookie that has the expiry date, and the browser looks at the difference between the two and applies that difference to the current machine's time. It might say 4.40pm down here, but if I go and adjust my date and time, say roll my date forward to Sunday, the cookie lifetime still comes out right. What this guy had done is he'd bought his computer in Sydney and then moved to Adelaide, and he'd changed the time but not the time zone. And there's a half-hour time zone difference between Sydney and Adelaide. We have a 20-minute cookie expiry for the authentication token. So as soon as he logged in, he was 10 minutes past his expiry time. So, quite a complex scenario to work out without any information from the person. But what we've basically got there is what I describe as a server-side version of Fiddler, where we can just give them a URL to go to, they can click it in an email, they hit start recording, and it gives us a correlation ID we can actually use. To be fair, we probably didn't even need the correlation ID, because you'd only really, hopefully, have at most one of these scenarios you're working through at a time. You're not really going to have five people on the phone working through five different scenarios. But it gave us a bit more traceability there, which we thought was powerful. All right. Any questions about any of that? Everyone's quite sedate here. I got told that you go to conferences in America and everybody asks questions all the time because they want to give their opinions the whole time, and you come to Norway and everyone just sits there and listens quietly. So, the way we actually spin this tracing up is incredibly simple. When you go to start the trace, we just look up the user's host address, whatever machine or network they're on, and we generate a trace session ID from that. And then all that's doing is, just in memory, keeping a dictionary from user host address to session ID. This solution here is demo-quality code and only has to run on one machine. You could do a very similar thing by persisting it back to AppFabric or a database or something like that and letting it propagate out.
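The in-memory dictionary being described amounts to something like this (a Rust sketch with invented names, matching the demo-quality, single-server caveat above):

```rust
use std::collections::HashMap;

/// In-memory map from client IP to an active trace session. A real
/// multi-server deployment would persist this somewhere shared instead.
struct TraceProvider {
    sessions: HashMap<String, u64>, // client IP -> session id
}

impl TraceProvider {
    /// Called when the user hits the start button on /debug/starttrace.
    fn start_trace(&mut self, client_ip: &str) -> u64 {
        let session_id = rand_id();
        self.sessions.insert(client_ip.to_string(), session_id);
        session_id
    }

    /// Called for every request; returns the session id to tag log lines
    /// with, or None when this client isn't being traced.
    fn should_trace(&self, client_ip: &str) -> Option<u64> {
        self.sessions.get(client_ip).copied()
    }
}

// Placeholder id generator so the sketch stands alone; a real system would
// use a proper random source.
fn rand_id() -> u64 {
    use std::time::{SystemTime, UNIX_EPOCH};
    SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_nanos() as u64
}
```

Keying on the caller's IP is what keeps the log volume bounded: only the one user who opted in generates trace output, no matter how much other traffic the site is serving.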
And in our Global.asax, ASP.NET has a really nice method, LogRequest, which surprisingly a lot of people don't seem to know about; they jump straight to EndRequest. LogRequest is the perfect point, where you have every single piece of information about the request, to go and log it, which is quite nice. Any questions about any of that? I was expecting more questions. I'm running nice and ahead of time. The scenario I was talking about, a couple of people nodded that you'd had a similar sort of scenario. Would that be sufficient information for you to solve it? You were nodding? [Audience:] Something different. We had native components running on the server generating data, and they'd usually throw up somewhere in the middle of the night. Right, so native components dying on the server underneath the web application; server-side things dying. We had a similar thing on this same project, where we pushed a build out to the farm and all of the app pools just started recycling on every server that the new build was on. The problem is we didn't detect this until we had actually rolled out the entire farm. The way our deployment strategy worked was: we had 40 web servers. We'd take two out of the pool, upgrade them, put them into the pool for an hour, monitor error rates, and then, as long as they were acceptable, it would automatically roll out to the rest of the servers. All nice and automated, and everything just ticks along. The problem was an hour wasn't long enough to surface this issue, and we were in a roll-forward-only strategy. So we could push out new fixes very quickly, but it was very hard for us to go back, because we were mutating data and there's lots of real-time data. And what it turned out to be was a particular piece of data on one auction lot that people could look at, which would cause a stack overflow and just tear down the entire farm. So the way we actually ended up diagnosing that: first of all, we didn't really have an easy way to hook up something like WinDbg to generate a dump, and we didn't want to do that across our whole production environment in an automated way every time the process crashed. So what we actually did was hijack our deployment process at the time, and we created a new build, because our deployment process, which was Go, a ThoughtWorks product, already had access to all the boxes. What that allowed us to do was run a script on a particular box, and it would snapshot all of the processes at that point in time. And they were crashing often enough that we could just pick a box that was slow and go: right, snapshot that one, because it's in the middle of a stack overflow. So yeah, it was just another way of gathering server-side information and dumping it out to disk. And then another scenario that we had was a performance-related problem, where we were struggling to understand why something was only slow in production and not in any other environment. We couldn't run a profiler in production because we were dealing with millions of requests coming in, and we couldn't generate enough traffic on a single box to cause the problem ourselves; we tried doing that in UAT. And we couldn't run a profiler on a production box just for the impact it was going to have on our production environment.
So what we actually ended up doing for that: there's some really low-level tracing in Windows, ETW, Event Tracing for Windows, which has a whole bunch of ASP.NET and IIS providers in it. It's an incredibly efficient tracing framework that basically spits out a binary file of every single event happening for the providers you set it up with, which we were then able to convert into CSV and pull into Excel to group. You basically get every single event in the ASP.NET pipeline, which allowed us to work out which module, in our case, was causing the slowdown. So that's another tool a lot of people aren't aware of, ETW, and it gives you a mix of native and .NET tracing and everything as well. In the end, the lesson out of that one was about not using the network layer for what was basically an application-level decision. What we had was, and this is where we completely came undone, how's this going to work: we had a series of 40 front-end web servers and a load balancer on the front here, where we'd bring requests in, and then we had two search index machines that we'd query, and on each of these we'd run two copies of the index, one on port 16100 and one on port 16101. I'll explain the reason for that. Then we had another load balancer in here. So the web boxes would hit this, and they'd go out and hit the search boxes. Now, the reason we were running dual indexes on each box was that we didn't have live updates at all. So in order to update the index, we'd build the new one, this was an old version of the indexing software, we'd spin it up, and then we'd have to turn the old one off. Except there was a start-up cost to swapping over the indexes. So what we'd do is alternate backwards and forwards on the one box: as soon as port 16100 went down, we'd go to port 16101, we'd query that, and five minutes later they'd toggle backwards and forwards again. And we were doing this at the network layer. At the time we thought, oh, this is brilliant: if the index is down, we just swap to the other one, because it also gives us failover, and it means we can just toggle stuff on the box. We don't have to notify all the web servers what's going on, because we didn't want to have some way to message all 40 web servers and tell them to swap index. The problem was, and this was our performance issue, when the port went down, TCP tries for a while to set up the connection unless it gets an active refusal. So it sends a packet, it waits three seconds, it sends another one, waits nine seconds, and then waits another 21 seconds. And the way this particular load balancer had been configured, it had also been set up as a firewall, which meant it was never returning the reset packets. So our box didn't know that the connection had been actively refused, so it got into this three, nine and then 21 second retry. And it spent so long retrying that it always actually ended up staying on the same index, because that index would come back up within that period of time, which we didn't realize until afterwards. We looked at our logs and went: wow, we were never actually using the second index. And this is why it was so slow.
We were waiting on those TCP retries every single time. So the lesson out of that was: we were making an application-level decision, toggling resources on and off, except we deferred that decision to the network layer, which was the wrong layer. So the solution we ended up with was building a software-based proxy on the front of these boxes, on port 16100, with the indexes behind it on 16101 and 16102. Then what the actual indexes would do is just send a message to that software router, which was really just a switch, and say: hey, by the way, we're now moving to index 16102, and it would flick all the traffic that way. That way the web boxes were able to maintain persistent connections through to the different instances via this software-based router, which kept us nice and performant. A lot of this, though, with the ETW data: the way this actually got identified as the problem, before we got into all the detail of drilling through the logs and trying to profile everything, was just looking at the raw performance. What we did was plot a histogram of all the request times, and there was a big cluster around three seconds, a big cluster at nine, and a big cluster at 21. And given the way the numbers went up, three, nine, and 21, as soon as one of our IT guys saw it, he just went: that's a TCP issue, because those are just the common retry times. So if you have something that takes three, nine, and 21 seconds, it's a network issue; go straight to the network. It took us something like four weeks to actually diagnose that problem. When you're dealing with a million requests coming in in production, it's very slow going profiling these things out. One of the other things that we do on another project I'm on is use a bit of JavaScript to monitor our clients. Sorry, yep? [Audience:] We had an issue at one point where some users weren't able to send messages on a page, and we weren't getting any exceptions, but the users were getting errors. We were logging stuff, but we weren't able to log that. It turned out to be because it was a dangerous request, so ASP.NET rejected it before any of our code ran. When it's a dangerous request, our modules aren't run; nothing is run in the whole project. So do you have any ideas for how, or whether there's any way, to actually log it when a dangerous request has been made? So what you're talking about is request validation. And where request validation gets triggered is: as soon as you touch the query string or the form collection or anything for the first time, it runs through and validates it. They've actually done a lot of work on that in recent versions of ASP.NET, I can't remember which. One quick check: the httpRuntime config element. There's a request validation mode in here, which, if you've got an existing app you've been upgrading, will be set to 2.0. You can go and update that number, which is better around MVC applications in particular, because what that then allows you to do is, under Request, there's ValidateInput, and there are two extra collections, an unvalidated form and an unvalidated query string, which you can go and query directly if you want.
Alternatively, with this request validation mode, you can actually turn it off at, sorry, an app-wide level, which you haven't been able to do previously; you had to do it on a page-by-page basis. That's actually what I do on most projects now anyway, where you've got rich content coming up from users, because if you're using Razor and HtmlString, you're encoding by default anyway; you have to explicitly not encode something onto the page. I think that's fairly safe. If you're in a Web Forms world, some of the other updates they've done around this allow you to turn off request validation for individual controls as well. It'd be worth looking into that. There's a heap of different options in here; you can actually plug in your own request validator. There's a request validation type in there where you can supply your own validator, and that way you could wrap the existing one, if you wanted to keep its behavior, but audit stuff out. There's a few different options there. Any other questions? Cool. Moving on to the client side, one of the other things we've been doing on one of the projects I'm on: there's a nice little bit that showed up in... hold on a second, as soon as I get the right API; I'd blame jet lag and the 21 hours of flying from Australia for my JavaScript capabilities. Sorry, I'm not googling for it, I'm just getting a screenshot. Ah, I know what I'm doing. This is the object I want. Cool. So in the browser, there's a new object that most of the browsers are starting to expose, which is window.performance.timing (I was missing the .performance part). This is an interesting object with all of the raw data that the browser captures around lots of different client-side events. So where you'd normally get this information coming out in your waterfall diagram in Fiddler or PageSpeed or something like that, you can capture a lot of interesting things in here. How long did the DNS lookup take? How long did it take to establish the TCP connection? That's information you can't gather yourself in your own testing, because it all depends on where the users are, what connections they're on, and how crappy their ISP is. So we actually aggregate a lot of this data, because we're really interested in performance, so we can understand things like: most of our page load time is actually the DNS being horribly slow; maybe we should add an extra DNS server in that particular region or something like that, and look at tuning our TTLs on the DNS and all those types of things. Recently as well, Google Analytics have added support where, if you want, you can just go into your Google Analytics account, tick a box, and they'll start recording all of these client-side timings for you too. And this is the API they're using to do it. Unfortunately, they did that after we'd built all of our own aggregation framework for it. One thing with the Google Analytics stuff, if you do go and use it: they only sample a percentage of your users, which is normally 5% by default, and they won't give you the reports until you have a useful amount of data in there. So if you have a lower-traffic site, I think something like sub-20,000 hits a day is their recommendation, you want to go in and turn that percentage up, which you can do in your client-side script.
And then they'll go and aggregate all that and put it into a nice report for you. Awesome. I broke it properly. Twice. So, particularly if you're on any sort of public website where you've got that diverse set of users, it's incredibly useful information to have to be able to make informed decisions. There we go. So in our Google Analytics script, what we do is push a config value where we say that we actually want to sample every single user who hits the site. The impact of that is, once the page is fully loaded, it sends an extra Ajax request back to Google which says: here's the timing information. It's a little bit of extra traffic in the user's browser, but that's after the page is loaded, so it doesn't really have any impact other than the bandwidth cost, which we deemed acceptable. Funny story about bandwidth cost, actually. There was one site I had once where it went live and we ended up with an infinite redirect problem. And the way we learned about this was we had a customer who phoned us up very angrily, because he'd gone away for a week and left our site open, and our site had sensitive data on it. So what it would do is, after your session timed out, the JavaScript would monitor the cookies and go: that cookie is not valid anymore, and it would reload the page so that you were no longer on your authenticated page. There was a problem with this in his particular browser, and he'd gone away for a week and left it open. And he was on a 100 megabit cable connection, which just reloaded nonstop for a week. We'd pushed 24 gigabytes of HTTP traffic down his connection out of our server farm, exceeded his quota, and given him something like a $300 broadband bill. So we had to send him a gift voucher for that one. Unfortunately, we had enough server capacity that we didn't even notice. Like, oh, that's where that 24 gig went. Not recommended. All right. So they were all the different scenarios I wanted to go through. It's a little bit disconnected, but a number of different tips and stories and things like that. So, does anybody have questions or anything else they want to discuss? Yeah, Thomas? [Audience:] Any plans of open sourcing server-side Fiddler? Well, actually, this demo app here, complete with the Configurator and the server-side Fiddler and everything, is at hg.tath.am, slash... somewhere, the DevOps demos one. Okay, must be a private repository. So I'll expose that. That has the code we've been looking at today, so if anybody wants to download it, I'll make sure it's open after the talk. It's horrible demo-quality code that shows how something works; don't copy and paste it. Or, if you do, use it to help you diagnose why your production site just crashed and died. The implementation of what it sends over the wire is right, but it doesn't have any of the signing stuff in there for the config approaches. I would like to write a blog post, more so than an open source library, on how we wired all that together, because I keep redoing it on every project now. Yep, so intermittent performance issues was what we had here, because this only happened every five minutes. The first thing I've found is just so valuable: I always used to jump straight into all the detail of saying give me every log, I want to know what method's running and whatever. But I actually find that rather useless as a starting point.
The first thing I actually do now is get a histogram of the response times over time. Because then, with this issue, what we saw was the pattern, and we could note that there was a five-minute cycle. And as soon as we identified five minutes, we went: you know what, it has to do with the search indexes. And as soon as the network guy saw it, he went: you know what, that's TCP. At that point, it's fairly specific; we only had one TCP-related search thing. That would have saved me four weeks if I'd started at that diagram. So get in there and just look for the pattern; there has to be a pattern to it. If not, and you're on IIS 7, the failed request tracing stuff is really good. You can actually set a threshold in there saying: if this page takes longer than three seconds to render, dump all the information about it. And that'll tell you exactly which modules are loaded and where it's at in the pipeline. So you'd at least be able to tell: is it in the request processing, or is it in a pre-module or post-module or something? And then, after that, you get into something we did try on this problem, though we weren't very effective at making it happen: having a script that monitored our performance, and as soon as things got slow in one of these periods, it took a snapshot every 30 seconds with WinDbg. That had a production impact, because we'd lock up the worker process while we wrote the dump to disk, except we had a production impact anyway: the site was slow. That's why the script was triggering. But when we did that, after the fact, we were able to find all the relevant information back in those dumps; we just drowned ourselves in information, unfortunately. Cool. Okay. Thanks for coming along.
A tiny subset of your users can’t login: they get no error message yet have both cookies and JavaScript enabled. They’ve phoned up to report the problem and aren’t capable of getting a Fiddler trace. You’re serving a million hits a day. How do you trace their requests and determine the problem without drowning in logs? Marketing have requested that the new site section your team has built goes live at the same time as a radio campaign kicks off. This needs to happen simultaneously across all 40 front-end web servers, and you don’t want to break your regular deployment cadence while the campaign gets perpetually delayed. How do you do it? Users are experiencing 500 errors for a few underlying reasons, some with workarounds and some without. The customer service call centre need to be able to rapidly triage incoming calls and provide the appropriate workaround where possible, without displaying sensitive exception detail to end users or requiring synchronous logging. At the same time, your team needs to prioritize which bugs to fix first. What’s the right balance of logging, error numbers and correlation ids? These are all real scenarios that Tatham Oddie and his fellow consultants have solved on large scale, public websites. The lessons though are applicable to websites of all sizes and audiences. The talk will be divided between each of the three scenarios described here, and then stuffed full of other tidbits of information throughout.
10.5446/52160 (DOI)
Hello, everyone. So, is anyone familiar with Johnny-Five? Yes. So the idea is, now that we have embedded-hal, like Jonathan explained this morning with Monotron: if someone writes drivers for a particular device, let's say an accelerometer or perhaps a temperature sensor, the idea is leveraging all of that and making a framework similar to Johnny-Five. That way we could make it very easy to get started with Rust on embedded devices, like microcontrollers, ARM Cortex parts, Arduino. Anyone use Arduino? So for Arduino, we could take a different approach: the Rust code won't run directly on the Arduino itself, it will run on your computer, based on a protocol called Firmata. Anyone use it? No. So, yes. And if someone wants to help with this, you can get started with embedded-hal; the Rust embedded working group has a very good guide about it. You can write some drivers. And I'm hoping to make a very standard API. So, for example, there are different types of accelerometers, and you could have one standard way of getting the data from all of them, independent of the driver itself. And that's how it would work. Thank you. Thank you.
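The kind of driver-agnostic API being proposed might look something like this trait sketch (the names are invented for illustration; this is not part of embedded-hal today):

```rust
/// One accelerometer sample, in g. A hypothetical shared data type that
/// every driver would return, whatever the underlying chip or bus.
pub struct Acceleration {
    pub x: f32,
    pub y: f32,
    pub z: f32,
}

/// A hypothetical standard interface: each embedded-hal based driver would
/// implement this once, and framework code could then stay generic over it.
pub trait Accelerometer {
    type Error;

    /// Read one sample from the device.
    fn read(&mut self) -> Result<Acceleration, Self::Error>;
}
```

With a trait like this, the Johnny-Five-style framework could accept any accelerometer, whether it sits behind I2C on a microcontroller or behind Firmata on a PC-tethered Arduino.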
This lightning talk proposes developing a framework similar to Johnny-Five to make using Rust for Robotics and IoT easier. Leveraging the embedded-hal ecosystem, we could develop a framework that runs on anything from microcontrollers like the ARM Cortex-M0 to single board computers like the Raspberry Pi, all using embedded-hal compatible, platform-agnostic drivers.
10.5446/52162 (DOI)
[The opening minutes of this talk are garbled in the source transcription and could not be recovered.] ...I realized the other day I didn't watch broadcast TV anymore; I've finally arrived in the 21st century. So: Nostalgia Nerd, The 8-Bit Guy (it's his t-shirt), Retro Man Cave, Lazy Game Reviews, all brilliant, you should check them out. And so, this project: I wanted to set myself a challenge. I wanted to do something that I don't think had been done before, certainly something that I hadn't done before. And I wanted to know how much you can get from very little. I'm a firm believer in, I think it was Stephen Fry who said it, that there is beauty to be found in constraints. It's the difference between landscape gardening 300 acres of park and a small urban backyard plot. That plot can be incredibly beautiful precisely because it is constrained, because you are limited in what you can do in those few square yards. It's the same with writing a haiku versus writing prose, writing a novel. There is beauty to be found in what you can extract from very little. That's how I feel about embedded systems. So, what have I done with my time? I wanted to make a system that's a bit like those computers from the 1980s. I wanted it to be super cheap; I don't have a lot of money, and I just wanted to use some bits I had lying around. I wanted to do it in Rust, which pretty much means I had to use a processor from ARM, one of their Cortex-M series microcontrollers, because they have the best support in Rust. They are not the only ones; there are some other platforms getting support as well, but Cortex-M is kind of where it's at. I wanted it to be kind of basic. I wanted it to be constrained, so I could challenge myself to see what I could get out of it. I didn't want the problem to come pre-solved.
I didn't want to just get a system, turn it on, load the demo code and go: ooh, I'm done. There's no fun in that. I wanted a raw bag of bits that I had to put together and do something with. So the first board I looked at was this one from ST Micro, the F7 Discovery. It's a great board. A megabyte of flash. To some of you, that might sound pretty constrained compared to your terabyte laptop; to me, it sounds like quite a lot, really. 340K of SRAM, which, again, depends on your point of view, but to me sounds like loads. 216 MHz; again, depends on your point of view, but I think that's pretty fast for a 32-bit embedded CPU. But this thing's got a proper LCD interface. If we're going to try and connect a TV or a monitor to do some graphics, having hardware support to drive a monitor or screen? No, come on. Where's the fun in that? It's even got audio and ethernet and everything. This is what it looks like. You connect the screen, you turn it on, and it just does it. That is the problem arriving pre-solved. There's no fun in this. So I went for a different board: the Texas Instruments Stellaris Launchpad. It used to be branded Stellaris, a brand TI had bought; they scrapped that branding and relabeled it the Tiva-C Launchpad. Tiva-C Launchpads and Stellaris Launchpads are basically the same, with minor tweaks. I already had a couple of these boards, I've written some drivers for them, and I've published a couple of crates on crates.io. So if you get one of these boards, there's good support; you can use the drivers I've written to get you going. 256K of flash, 32K of SRAM. Now there, that's a challenge. That is not really enough SRAM to store a whole bitmap, never mind a coloured bitmap. You'll see how we solved that one. A Cortex-M4F at 80 MHz. Not too fast, not too shabby. There are only a few peripherals. Most importantly, there's no VGA support in the chip, no graphics support, and no external bus. There's no way of dropping in a video chip like we did in the 1980s. I'm going to have to do this myself. This is what the board looks like. These are super cute, because you get this chip, which is the one you're allowed to program, and this other chip, which is another copy of pretty much the same chip. It comes pre-programmed by TI, and that's your JTAG programmer and USB serial converter. You just plug this into the USB on your computer, you can run an open source program called OpenOCD, you can flash the board, and you can get serial comms in and out. There's nothing else you need to buy to use this board. You buy the board, plug it in, and you can program it and use it, and you can do that in Rust. You get a couple of buttons, and this is a red, green, blue LED. You can blink an LED, and that's embedded hello world: if you can make an LED turn on and off, you're all good. This is what the chip looks like internally. The way these microcontrollers work, they're a lot like a whole computer from the 1980s, but shrunk down to a single piece of silicon. There is a processor core that reads the instructions, does the maths and moves the variables around in memory. There is some memory, some RAM, some ROM, and there's a whole bunch of peripherals. In those simple 1980s computers, these would have been separate chips, all connected on some kind of bus, the 8 or 16 parallel wires you see snaking across the circuit board. Here they're all baked in. We've got some timers. We've got some synchronous serial devices, which help us push out pixels. We've got a serial port. And it runs at 66 or 80 MHz.
That's quite important because the clock PLL on this is fixed and it will either run at 80 or 66. It won't do 72 or 70 or any number in between. It's literally 80 or 66. How do you get started doing embedded Rust? You need to drive the hardware, and in C that means you need a bunch of numbers you take from the datasheet. We have a mechanism in Rust where we can auto-generate the code to access the chip. The chip manufacturer provides an XML file. Not all of them do; some do. It basically describes the registers. It says there is a serial port here. The serial port has a register at this address. Those 32 bits are split up: these 8 bits mean this, these 4 bits mean that, these 12 bits mean this. If you have that machine description, you can auto-generate this code. Here we are saying: in the reset and clock control (RCC) peripheral, we are on the AHB bus enable register, and we're going to modify that register. Then this modify function takes a closure. It's embedded Rust but I can still use closures. That works perfectly well. This closure takes a read and a write, and I'm going to use the write object to set the IOP enable bit. That might look like nonsense, but if you read the datasheet, these words IOPAEN and AHBENR all line up with the datasheet. If you've got the datasheet open, the code you write makes sense compared to what you're reading. It's impossible to get this wrong. You can't accidentally set the wrong bit because you mistyped a 6 for a 7, like you can when you do this in C. You can't accidentally write a 16-bit value to an 8-bit field. It won't let you. It says no, that doesn't fit. If you've got a field which is an enumeration and there are like three choices and the fourth option is invalid, well, you can only give it the three valid ones and you can't give it the invalid one. So there's loads of power in this, and then when it compiles, this disappears and becomes a single register write. So it's as efficient as doing the hand-optimised C where you've written out all the literal integers and done all the bit shifting. But if you've got a datasheet and you understand the chip, this, I think, is easier to understand what's going on.
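The pattern being described looks roughly like this, as a hedged sketch; `pac` stands in for whatever svd2rust-generated peripheral access crate you use, and the RCC, AHBENR and IOPAEN names here follow an STM32-style datasheet rather than the TI chip in the talk:

```rust
// Hypothetical svd2rust-generated API; register and field names line up
// with the datasheet of whichever chip the crate was generated for.
let p = pac::Peripherals::take().unwrap(); // singleton: can only be taken once
p.RCC.ahbenr.modify(|_r, w| w.iopaen().set_bit()); // read-modify-write one bit
```

That whole closure compiles down to a single masked register write, which is the point being made here.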
So if you listen to the New Rustacean podcast, you know they do "crates you should know". Here are my crates you should know for embedded development. svd2rust and the crates that are generated from that: have a look on crates.io and you'll see crates for various different embedded chips. That's your sort of low-level access to your chip. We then have embedded-hal, a hardware abstraction layer. We're trying to find ways of saying: this is a serial port. You can read data. You can write data. Now I don't care if you've got a TI serial port or an ST Micro serial port or an NXP serial port. We'll try and write drivers that make all of these serial ports look the same, and then we have some code portability. So we have these HAL crates that dress up the chip to look standard, make them all look similar, so we can write some portable code. And then for Cortex-M there's a couple of crates that help you get the chip booted. It can be a little bit fiddly, but it's great because Jorge's done some great work and you just extern-crate those crates and your chip will start up and you can do interrupt vectors. For most of this year I've had to use nightly Rust. It's a little bit like gambling. I go rustup update. You roll the dice. Snake eyes. Sorry, sucks to be you. Thumbv7's broken again. We are going to be on stable embedded Rust by the end of the year. We've got a couple of blocking issues. So to go with that we are also writing three separate books. One is an introduction to embedded systems, one is an introduction to embedded Rust, and one is all of the deep, weird compiler voodoo required to make embedded systems work. You can find them all on our GitHub. We love pull requests. We love reviews. So please check out the books and help us out with that. By the 2018 release you will be able to do embedded software on your Cortex-M using 2018 syntax, if you wish, and using the stable compiler. I've done a bunch of tests on beta. If you're not testing on beta, please test on beta. Let's find all these bugs before we go stable. Video. What I want to do is connect this to a monitor. How do we connect things to a monitor? Well, VGA is the simple way to do it. HDMI is insanely complicated. VGA video is an analogue system with three colours, red, green and blue, and the colours you see are some mix of red, green and blue. This is a representation of a line. If you imagine a laser pointer, or the cathode ray in your CRT, it starts on the left side of the screen and it swings across, drawing pixels in a line, and then it flies back to the left and comes over again. It's swinging left and right. As it swings across, there's a bit where it's off screen. That's the synchronisation period. Where you've got the red, green and blue lumps, that's the portion where the picture is. You have these lines, and then lines make up a frame. Again, some of the lines are invisible. They're off the top of the screen or off the bottom, depending on your point of view. Some of them are on the screen. That turns out to be really useful. While the beam is going across the screen in the visible portion, my CPU is going to be pretty busy doing pixels. While the beam is off the bottom of the screen in this invisible area, I've got no pixels to draw, so my CPU can go and do something else. I should mention, VGA is an analogue system. You are supposed to have analogue operational amplifiers and a 75-ohm source impedance. Turns out that's completely optional. You can just connect an HDMI splitter and a Hauppauge video recorder into a projector using three resistors on a 3.3-volt TTL output. We'll probably get a picture, or a fire. We are going to have a great afternoon. This system, VGA, is all about timing. If the timing falls apart, if you do not do things when the monitor is expecting you to do them, it says: out of sync. I have a lot of experience with monitors saying out of sync. That means the timing was not as expected and the picture stopped. If you're slightly out of sync, what you get is a wobbly picture or a fuzzy picture or a picture where all the colours are smeared out. If you go to tinyvga.com, it's a brilliant website, you can see the timings required for a whole bunch of different standard video modes as used on IBM PCs. For example, the basic video mode from Windows 3.1, the standard VGA mode: 640 x 480 pixels, 640 across, 480 lines, 60 hertz. You need to emit a new pixel at 25.175 MHz. That is exactly how often the pixels need to come out, otherwise it's not going to work. There's a little margin for error. You remember my chip is clocked at 80 or 66. That is not an integer factor. The BIOS mode, so text mode, when you boot your computer, is not actually a VGA resolution, it's 720 x 400, the classic DOS 80 x 25 text mode: 28.322 MHz. No, that's not going to work either. Super VGA, the next step up: 800 x 600 pixels at 60 hertz. 40.000 MHz is the pixel clock rate. Yes, that is exactly half of 80. I think we have a winner.
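Written out as constants, those are the standard timings listed on tinyvga.com for 800 x 600 at 60 Hz; halving the 40 MHz pixel clock to 20 MHz is what gives the 400-pixel-wide mode described next:

```rust
// 800x600 @ 60 Hz, from tinyvga.com. Horizontal values are in pixel clocks
// (at 40 MHz), vertical values are in lines.
const H_VISIBLE: u32 = 800;
const H_FRONT_PORCH: u32 = 40;
const H_SYNC_PULSE: u32 = 128;
const H_BACK_PORCH: u32 = 88; // 1056 clocks per line in total
const V_VISIBLE: u32 = 600;
const V_FRONT_PORCH: u32 = 1;
const V_SYNC_PULSE: u32 = 4;
const V_BACK_PORCH: u32 = 23; // 628 lines per frame in total
```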
The problem is 800 pixels by 600 pixels is a lot of pixels. I haven't got enough SRAM to store all those. If you take 800 x 600, multiply them out with one bit per pixel, it doesn't fit in RAM. How are we going to fix this? The first thing we do: we cheat. If you halve the pixel rate from 40 to 20 MHz, you get yourself a 400 x 600 display. It's a little bit smaller. You can just about squeeze it in. It's also less CPU required, because each pixel is twice the size. We split the screen into text cells, like characters in a DOS screen, 8 x 16 pixels for each cell. What you can do is put the characters in memory, and then you can convert the characters to bitmaps in real time as the scan line is running across the screen. The pixels on screen are not stored in memory at any point. Only the characters are stored, and the characters are rendered with the font. I'm not using TrueType fonts, just a basic bitmap font. I'm not rendering Bézier curves on a Cortex-M. We put a little bit of a border on, 12 pixels and 8. It makes things simpler. It gives the monitor something to lock on to. If you're doing homemade VGA and you've got a little image in the middle of the screen, it's hard for the monitor to know where the edges are. The border is a bit of a trick to help the monitor line up. How are we going to do this in software? We need to do it in interrupts. Your code is running, an interrupt occurs and your CPU switches and does something else. It turns out a non-constant amount of time elapses between the interrupt and your code starting to run, which gives you wobbly video, because the starts of your scan lines then don't all line up. We have a timer, and when the timer goes off we start our routine. The way this works, I use the SPI, the synchronous serial peripheral, and it's got a FIFO in it. It's a bit like a bathtub. When the first timer fires, the far left of this diagram, I start generating the pixels that need to be displayed on screen by looking up the characters and rendering them, and I start filling the bathtub. Then when the second timer fires, precisely here, I pull the plug out of the bath and it starts draining, and the pixels are displayed across the screen. In theory, if I'm filling the bath sufficiently fast, it will never be empty at any point. If it goes empty, there is a black line on the screen. That's happened a lot. I'm doing as much work as I can, so ideally the bath just runs out at the end of the scan line. It all works. This is the fundamental number: I have 32 clock cycles for every character on screen. A character is 8 bits across, that's a byte. 32 clock cycles to go into the text RAM, find the character, say the letter A. Look up A in the font. There are 256 entries in the font. Find out which bits I need for a letter A at that row of the character. There are 16 rows in each character. I then have to go and, if I'm doing attributes like underline or bold or maybe colours, do a whole bunch of maths, work out what I'm doing there, and then get those pixels chucked into the SPI peripheral. 32 cycles for each byte, and 48 to 50 bytes across the screen. There's 37,000 lines per second, and it has to do that every single time. If any one of those goes wrong, it's not going to work.
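That inner loop, as a hedged sketch: `TEXT_RAM`, `FONT` and `SpiFifo` are hypothetical names, the border and attribute handling is left out, and the real routine has been hand-tuned to fit the 32-cycle budget.

```rust
// One scan line of text rendering: look up the character, look up its
// bitmap row in the font, push 8 pixels into the SPI FIFO (the "bathtub").
fn render_scan_line(line: usize, spi: &mut SpiFifo) {
    let glyph_row = line % 16; // which of the 16 rows of the glyph we're on
    for col in 0..48 {
        let ch = TEXT_RAM[line / 16][col] as usize;
        let bits = FONT[ch * 16 + glyph_row];
        spi.write(bits);
    }
}
```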
It's worth saying I have had some fun with the optimiser recently. I was in a situation where I was editing some code and I'd rebuild it, and my scanline routines would not run fast enough and my monitor wouldn't sync. I edited some completely unrelated code and built it again, and the optimiser did things in a slightly different order, and my picture was fine. I'd edit something else and then it would all fall apart again. It was incredibly frustrating. I think I managed to take some maths out of the loop and pre-compute some stuff, and I think we've got a solid picture, but I'm kind of pushing the limits as to what you can do in Rust. So, enough talking, demo time. APPLAUSE So, I'll get my Bluetooth keyboard to work. Maybe I'll just have to use this keyboard? There we go. So, there's a built-in menu system. This is Rust, so it's all divided into crates. There's a crate that does the video driver, there's a crate for the application. This menu system is a crate. It's on my GitHub, it's called menu, and it allows you to plug in static commands, and it generates the help text, and you can go in and do stuff. So, we can do a dump, so this will show you the contents of memory. So, if you want to see what an ARM interrupt vector table looks like, this is an ARM interrupt vector table. It's at address 0. This one is 0x0000078D. These are all the crash interrupt vectors, because I'm not using those interrupts. So, if someone calls them, that's bad, and it crashes. But some of these are different numbers, and one of these will be the video rendering routine, and so on. I've got a couple of demos to show you. A couple of these demos are baked into ROM. We can show you what the data is going to look like. So, we'll go to the data. It's baked into ROM. We can show you what the font looks like, while I stay in range of my Bluetooth keyboard. Alphabet. So, this is the MS-DOS code page 850. It felt like a balance between accented characters and some of these graphical blocks. I could have done 437 for you Americans, but you took all the accented characters out and just had more block characters, because who needs international characters? Code page 850 is a sort of a nice thing. So, we've got this text mode, and as I said, these are being rendered in real time as the beam goes across the screen. There is a significant amount of processing happening to make this picture appear. The colour, for example, is interesting. There are three SPI devices. One for red, one for green, one for blue. If you start them in the obvious way, start red, start green, start blue, they don't start at the same time. One of them has to start before the other. That means on screen, the red, the green and the blue don't line up. If you draw a white thing, it has a red edge and a yellow edge, and then it's white, and then it's purple, and then it's blue, because none of them are in phase. The trick, it turns out, is some inline assembler and a lot of no-ops. You actually start red two bytes early, and then do a bunch of no-ops, and then you start green one byte early, and then do a bunch of no-ops, and then start blue, and they remain in phase across the screen. That took about two months to work out. You're welcome. I'll let you have that one for free.
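The stagger trick, sketched; this is illustrative only. `spi_red` and friends are hypothetical handles, and the real code uses inline assembler with offsets found by trial and error:

```rust
use cortex_m::asm::nop;

// Start the three SPI peripherals deliberately out of step so that, by the
// time pixels reach the screen, red, green and blue drain in phase.
spi_red.enable();   // red starts two bytes ahead
nop(); nop(); nop(); nop();
spi_green.enable(); // green starts one byte ahead
nop(); nop(); nop(); nop();
spi_blue.enable();  // blue starts last; all three now line up
```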
So, the sort of things you can do on this system. You could play Space Invaders. I've got a text-based display, I've got some double-height text. I could write this game if I wanted. You could do Pac-Man. It's not great, because I haven't got a Pac-Man graphic. But that will work. MS-DOS productivity software. People ran their lives on this kind of thing. That's fine. That would work. You could run your life on a computer you'd made yourself. You could do artwork, kind of blocky, because we're in text mode. I have no artistic merit. I'm happy to admit that. I was using an ASCII art editor called REXPaint, drawing things. I'd drawn landscapes and trees and houses, the sort of stuff my four-year-old can manage. The only thing I could think to draw was a picture of the ASCII art editor. This is a picture of the ASCII art editor, drawn in the ASCII art editor. But you get the idea. You can select your colours, and you can paste in characters from the font. I have no artistic merit, so I got in touch with a renowned eight-bit artist. His name is Horsenburger, and he does teletext art. If you remember teletext, the text service you could get on your analogue TV that told you the weather and the news: in the UK it was very popular, and there was a lot of artwork on there. Horsenburger had done a lot of this artwork, and he's done us a special teletext artwork for RustFest. So there is Pixel Ferris. Technically, Ferris is not made of pixels. He's made of sixels. Teletext is this weird thing. What we've done here is we've switched the font. I've now loaded a different font into the system, and you've got six individual pixels, or sixels, within a block: two across, and then three down. Each of these is actually a character, but it has a different number of the little blocks turned on. Two to the power of six is 64, so I've got 64 block characters, which let me do this. He sent it to me in teletext format. He gave me a thousand bytes of data, and I had to go and get the teletext specs and understand all the control codes. This is actually a thousand bytes of genuine teletext data baked into the ROM, decoded at runtime, converting all the control codes and mapping all the stuff around. Which made me realise I can just go to the internet and download teletext data. This is a genuine page from 1983 that someone accidentally captured on their VHS recorder, and has since digitised and uploaded to the internet, to the teletext archive. Seriously, this is genuine teletext information from 1983. I think the system is fine. I could use it. Imagine a Twitter client that looked like that. You'd be absolutely fine. Just to demonstrate, because these have all been a bit static, there is some extra CPU time. How about graphics? I hear some oohs. What we've done here is we've used 17 kilobytes of video memory to do a black and white bitmap. APPLAUSE We've overlaid that bitmap on top of the text background. These are all text cells. Instead of rendering font, I'm pulling the graphics from the bitmap and then basically attaching it at a different scan line on the screen. I don't have to move anything in memory to slide it around, because I'm not sure the chip's that fast. This is basically a cheat, but it looks kind of good. There's enough CPU time to do the flame animation. APPLAUSE So far, this is all kind of similar to what we had at RustFest Paris, but with the teletext stuff. I understand that's not going to be enough for you people, and you would like to see something new and something different. How would you like to see a program written in C, compiled for ARM, injected over the serial port, and then executed from RAM? If I get out of the demo menu, because I can't do it while in the demo menu. There we go. There is a C program that has been injected over the serial port and into RAM, and then that runs just fine. OK, that's all right. How about some more interesting C programs? How about a copy of Dr. Dobb's Tiny BASIC, originally for the 68000?
Welcome to a BASIC interpreter, so I can say 10 PRINT "HELLO", 20 GOTO 10, RUN. Yeah, woo, you can do BASIC. APPLAUSE Right, so let's go back in here. Let's exit out of that. I can tell you're a tough crowd and you're still not impressed, because Tiny BASIC is only an integer BASIC and there's no floating point support. So wouldn't it be better if we used a proper BASIC, a Microsoft BASIC? How would you like to see Microsoft enhanced BASIC for the 6502, running in a 6502 emulator? LAUGHTER That is genuinely Microsoft code for the... for something like the Altair. I can't remember which machine this is from, but this is genuinely Microsoft BASIC. The reason we can tell it's different is that all the words have to be in capitals. LAUGHTER So there we go. It's not as fast, but that is a 6502 CPU emulator. APPLAUSE So I've got one last demo. I have to reboot the board to get out of that. All this BASIC stuff is fine, and people normally use BASIC to write games. So how about a copy of Snake? I've got my Atari 2600 joystick. LAUGHTER But what you really need is some background music. So how about a three-channel wavetable synthesiser that executes at 37 kHz at the end of the scan-line interrupt? LAUGHTER So I don't know how well that's coming out on the microphone, but there is a three-channel synthesiser running, and then we can... If we grab the microphone, we might be able to... LAUGHTER APPLAUSE So I don't know if I mentioned, but I have no artistic merit. This is the most annoying tune I've ever heard, and I have heard it a lot. So that's the end of my talk. So what I'd like you to do now is watch me have a little game of Snake, whoop and cheer every time I get an apple, and then when I eventually crash into the wall, you may all break into wild rapturous applause. Are you ready? Here we go! Yeah, come on! CHEERING Oh, it's tricky. CHEERING No, I missed it! No! No, come on! CHEERING APPLAUSE Thank you very much!
I missed the simplicity of computers like the C64 and the Apple II, and I wondered if I could recreate something like that, but using the Cortex-M4 devboard on my desk and a handful of resistors. Can you generate VGA without a video chip? Can you render text without enough RAM for a framebuffer? Can you read from a PS/2 keyboard? Can you get any audio out of it, while doing video at the same time? Can you do it all in Rust, and run tests on an actual PC? Will it run fast enough to be useful?
10.5446/52163 (DOI)
Thank you for that. All right, so I'm going to try to go through this quickly for the sake of not running over time. A little bit about me first. I work in security at Google. I get to work on open source Rust, which is really cool. If that sounds like something you would like to do, Rust or security or both, come talk to me. And I just started the secure code working group for Rust. If you're interested in that, also come check us out. The thing we're going to be talking about: it's an experimental networking stack. It handles the link layer through the transport layer, so Ethernet through TCP and UDP. It is written in pure Rust, obviously. Design goals are low-resource environments, so low CPU utilization, low binary footprint. And finally, it is structured as a platform-agnostic core and then some platform-specific bindings. So if you are interested in using this thing for your kernel or your project or whatever, just let me know. It's obviously very early, APIs are changing, but if it's just sort of experimental, that could be cool. A quick note on the subtitle that was advertised: high performance. This networking stack can currently respond to pings. That is all it can do. Yes, thank you. Yeah. So when I say high performance, I'm not referring to actual benchmarks. I'm referring more to design, to the fact that we can prove statically that a bunch of things are happening at compile time rather than runtime. I suspect that those will lead to actual good performance later, but I don't know. So we'll find out. So, a brief outline for the talk. First we're going to talk about design goals for the stack and the stuff we're going to be talking about, and give a little bit of background on packets and how they're laid out, just to get everyone on the same page. Then we're going to talk about parsing packets. Then we're going to talk about serializing packets. Then we're going to talk about forwarding, which is basically parsing and then serializing packets. Right, yeah. It's very fancy. And then finally we're going to talk about some nitty-gritty details of how we do zero-copy in a safe way. So, the goals for this design. First of all, zero copying. When I say zero copying, I don't literally mean no memory reads or writes, obviously, but if we have a big buffer and there is a packet living in the buffer and we want to operate on it, so we want to parse it or modify it or serialize a new packet into the buffer, we're not going to pull it out into some scratch space and operate on it and then move it back. We're just going to do everything directly in place. As I said, we expect this to make it high performance. We'll find out. The second thing is zero heap allocation, so everything's on the stack. And then finally zero unsafe, so no use of the unsafe keyword. This may sound really cool and too good to be true. That's because it is. There's actually a little bit of unsafe. But in standard Rust fashion, that unsafe is self-contained in nice little small packages and it can be nicely composed. Another thing I want to clarify here: this is not a "look at the cool things you can do in Rust" talk. This is a "look at the cool things you can do safely in Rust" talk. Everything that we're going to describe here today can be done, and is done, in C and C++. The only difference there is it's very unsafe and you, the programmer, have to verify a lot of properties yourself. So the distinction here is not that we can do this thing, but rather that we can do this thing differently.
Quick note on terminology. When I say the word packet, I am aware, for the pedants in the audience, that there are things like Ethernet frames and TCP segments. If you read the code base, you will find these things. But for the sake of simplicity, everything is a packet. I'm terribly sorry. So, a little bit of background on how network packets are laid out. Packets are just a big sequence of bytes. All these diagrams that you're going to see today are beginning byte on the left, end byte on the right. Packets are recursive. So the analogy that we're going to be using throughout the entire talk: I hope you like onions. We're going to be talking a lot about onions. Basically you've got a packet which contains a packet which contains another packet which contains another packet, and so on and so forth. The packets that we're going to be talking about are just a header and a body. I'm sorry for those of you who really like packet formats: no packets with footers. So there's a header at the beginning whose format is fixed. As in, if you go and read the Ethernet spec, for example, you will see there is this field and then this field and blah, blah, blah. So the header has a well-known format that we can parse, and then the body is completely unstructured and variable-length. We have no idea what's in there. It's completely opaque. Now, once we know what is in there, that might be a different format which we can go parse. So for example, we might discover that, oh look, there's an IP packet in there. And then we can go pull out the IP spec and we can read it and say, oh, this is how you parse an IP packet. But from the perspective of Ethernet, it's completely opaque. So let's say that we have this IP packet here. We're going to parse it again, an IP packet, just in the same way: there's a format header at the beginning, we know what that format is, and then there's an unstructured body. Maybe this contains a UDP packet, and so on and so forth; we can keep going like this. An important thing to keep in mind here is that when we're parsing, we don't actually know ahead of time what all of these types are going to be. So we might get an Ethernet packet and we don't know what's inside of it. It might be an ARP packet, it might be an IP packet. We actually have to look at the header. The header tells us what's inside, so we actually don't know until runtime what the next thing that we're going to parse is. So this is unfortunate; it has to be less static than you might like. So let's talk very briefly about what our end goal is. If you recall, there are four parts; parsing, serialization, and then forwarding is our third part. We're going to build up to forwarding at the end, and I want to describe very briefly what that's going to look like, just to keep in mind what we're building towards. So in forwarding, we are going to allocate a buffer on the stack, and we're going to receive an Ethernet packet into it. We're going to parse the Ethernet packet, we're going to parse the IP packet inside of the Ethernet packet, then we're going to decide to forward the packet. So we're actually just going to resend it to somebody else. First we're going to update the header, we have to do some updates when we forward packets, and then we're going to serialize it inside of a new Ethernet packet, and all of that is going to happen in place on the stack, no copying.
So that's the goal that we're building up to, but first we have to get through some other things, namely parsing. The goal of this section is a lot simpler. We are going to allocate a buffer on the stack, and we're going to receive an Ethernet frame into that buffer. We're going to parse the packet, we're going to parse the packet, we're going to parse the packet, anyway, this is the, you get the idea. Very simple example: we just receive it and parse, parse, parse, parse, parse. That's the whole example for this section. So, all right, the buffer trait. The buffer trait is a really important trait in the netstack. It is sort of like the core trait for parsing and serialization. The reason that it's a trait and not a concrete type is that we have a bunch of implementations of it, but the details there aren't important. A really important thing to keep in mind with buffers is that they are referencing. And by that I mean that when you have ownership of a buffer object, that's just a tiny little struct thing with some pointers, right? Maybe it's an owned pointer, maybe it's a reference to something, but the point is that thing is tiny; the buffer bytes themselves live somewhere else. So we're going to talk about moving these things around a lot; keep in mind that that's a very cheap operation, because this is basically just a pointer. So these things are basically just a sequence of bytes, just like a packet would be. You might reasonably ask the question: why is this not just a slice? Well, the answer is that we need to keep track of how much we've parsed so far, or how much we have serialized. So the way that we do this is we split the buffer into a prefix and a body. The prefix is basically everything that I have parsed, or not yet serialized, and then the body is everything else. So it's essentially just a pointer into the buffer, right? Here's the whole buffer and here's where I am so far. So you can consume bytes from the body and they get added to the prefix, so I can shift this way. You can consume bytes from the prefix and they get added to the body, shift that way. And the invariant that we're trying to maintain here, and this is going to be really important through the whole thing, is that the body always contains, when you are parsing, the stuff that you haven't parsed yet. So if I have an Ethernet packet and I chomp the Ethernet header off the beginning, then I have the body left, and when I pick up where I left off and call the parse method again, I'm naturally parsing the next most encapsulated thing. And then when I'm serializing, the exact inverse happens. The current thing in the body is everything that I have serialized so far, and when I ask, please serialize your header, it starts where we left off and goes this way and expands the body to include the now larger packet. So what is in the body is really important; where that body is versus the prefix, that's going to be sort of the core of everything here.
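As a hedged sketch (the real trait in the codebase is richer, and these method names are guesses), the buffer idea looks something like this:

```rust
// A buffer is a window over some bytes, split into a prefix and a body.
trait Buffer {
    fn prefix_len(&self) -> usize;        // bytes in front of the body
    fn body(&self) -> &[u8];              // not yet parsed / serialized so far
    fn body_mut(&mut self) -> &mut [u8];
    fn shrink_front(&mut self, n: usize); // move n bytes from body to prefix
    fn grow_front(&mut self, n: usize);   // move n bytes from prefix to body
}
```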
So let's walk through an example. This is a diagram here. On the bottom, we have the actual physical structure of the buffer. It starts off as being all body. On the top, we have the logical view. So logically, the contents of these bytes are an Ethernet header and an Ethernet body, but we don't know that yet. And so when we parse it, it's going to look a little something like this. What we have here as the result of parsing is an Ethernet packet object. An Ethernet packet object is a tiny little struct that just has some references into the buffer. Again, this is the zero copy theme here. The Ethernet packet object is just references. It has one reference into the header and then one reference into the body. An important thing to keep in mind here is that the body is just bytes, we don't know what it is, so that's just a byte slice. The header, on the other hand, is actually structured. We'll get into, at the end of the talk, how we actually achieve that without unsafe, but you should basically think of this as a struct reference. So we have a struct. It represents what the header actually looks like. We've got this field and then that field and so on and so forth. And this is just a reference to that struct, so we can just access it like plain vanilla Rust. So we've got that struct.field, blah, blah, blah. Last thing to note is, because these are references, the Ethernet packet object borrows the buffer. So if we want to actually move the buffer around, which we will want to do, we have to drop the packet first. So a pattern that you're going to see over and over again is: first we parse, then we operate on that actual object, and then before we do anything else, we drop it. Then we pick up where we left off, but now we've forgotten about that thing. Okay, so as we said before, we've got this and then we're going to drop it. It goes away. And what we're left with is that the body of the buffer now corresponds to the body of the Ethernet packet. So as I mentioned before, we can just pick up where we left off and parse the next packet. The exact same thing happens this time with the IP packet. We have a header. It references into the buffer and it is a structured reference to the actual, you know, a struct whose fields we can access. And then an unstructured reference to the body. It borrows the buffer, and so when we're done with it, we have to drop it. Let's look at a little example of what this might look like in code. This is obviously a little pseudo-code-ish, because in reality I would want to do more things if I receive an Ethernet packet, but this really is the core of it. The buffer has a parse method and you literally say, here is a type of a packet, please parse it for me, done. And so you get a packet object back here, and then you might operate on it, and then you drop it, and then you pass that buffer. Again, as I said, you pass the buffer by value. You're actually passing ownership. This will become important later. So you pass the buffer by ownership to the next layer of the stack. The body is left as it was before, so the next layer of the stack just picks up right where we left off.
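Roughly, that parse-then-drop pattern looks like this; the type and method names here are approximations of the real API:

```rust
fn receive_ethernet_packet<B: Buffer>(mut buffer: B) {
    // The buffer has a parse method: give it a packet type, get back a
    // struct of references into the buffer (hypothetical names throughout).
    let packet = buffer.parse::<EthernetPacket>().unwrap();
    let _src = packet.src_mac(); // operate on the packet via the struct reference
    drop(packet);                // release the borrow before moving the buffer
    receive_ip_packet(buffer);   // body now starts at the encapsulated packet
}
```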
So let's actually look at what this might look like in memory on the stack, so we get a sense of how all the memory is laid out. Let's say we have a buffer that is allocated on the stack, so it's in the stack frame there. First we pass the buffer into the receive Ethernet packet function. This little black arrow here just means that the buffer object is here, but the actual bytes of the buffer are still in the original stack frame. First we parse the Ethernet packet, just like we mentioned before. So the Ethernet packet struct itself, which is this tiny little thing, lives in this top stack frame here, but the references are all into the contents, which live in the original stack frame, right? No copying. So we drop it, and then we send the buffer again, into the receive IP packet function this time. And again, we parse, this time an IP packet. The tiny little struct lives in that stack frame and the references point into the bottom stack frame. So that's parsing. It's very straightforward. Let's get into serialization. So the goal here is that we are going to receive a request to send a UDP packet. So there is an application, it's like a DNS client or something, and it says: here are the contents of a UDP packet that I would like you to send out onto the network. So we're going to compute some header information for the UDP packet. Then we're going to compute some routing information for IP, where are we sending this thing, and then some information about what the IP header will need to have. Then some Ethernet routing and header information. And then finally, after we've done all of this, only then do we actually compute the length and allocate the buffer. So in all of the computation before, we haven't actually been doing anything. We've not been serializing. We've not been doing any kind of allocation. Only at the bottom do we actually say, OK, now that we know how big the packet needs to be, now we can allocate the buffer. I know that I said stack allocation at the beginning. We're going to heap allocate in this example just to keep things simple. We'll actually stack allocate in the end. And then finally, once we have a buffer, then we can just serialize in one big pass: UDP body, UDP header, IP header, Ethernet header, and then we're done. So, a little note on why this is hard. Some of you in the audience may be thinking, OK, well, if you know how big the packet is, why don't you just allocate it ahead of time? Why do you have to do all of this sort of weird stuff? And the answer is that you need to send the request to the next layer before you serialize. So for example, if I have a UDP body and I want to know what buffer I'm going to serialize it into, I have to know, obviously, how big all of the headers are going to be, so I know how big the buffer needs to be. But the IP layer needs to know where it's sending the thing in order to know which link layer protocol is being used. Am I sending it over an Ethernet network that has a header of this size? Am I sending it over a Wi-Fi network that has a larger header? So you need to actually compute all the routing information before you know how big the buffer needs to be. And so there's this chicken-and-egg problem where, in order to call the function that says "please send this thing for me", it cannot be serialized already; but of course, you can't get control flow back, because the thing needs to be sent by the time you call the function. And so that's why this is difficult, and we'll see how we solve this problem. So the first trait that we're going to introduce to deal with serialization is the packet builder trait. Remember before that I said that all packets are onions, right? And they are just sort of a series of layers. The packet builder is like a single layer of the onion. Nothing inside of it, nothing outside of it, just one sort of shell, right? And it assumes that we already have a buffer allocated that already has enough prefix space to serialize everything. We'll solve that problem later. And what it does is it serializes itself into the buffer. So you give it a buffer like this. So here's a buffer that has a body.
And you say, dear packet builder, in this case an Ethernet packet builder, please serialize yourself right here, right? We already gave you the body. We gave you enough space, just write yourself in. And so this contains all the metadata needed to figure out what that header needs to look like: source address, destination address, and so on and so forth. So that handles just a layer of the onion. Let's talk about the whole onion itself. The serializer trait, on the other hand, represents the entire onion. So it represents a single layer and everything below it. And it's recursive. So if you take a serializer, which represents the whole onion, and you add a packet builder, which represents the next layer of the onion, you get a bigger onion. And this is how we're going to construct all of our serializers. Another important thing to note here: a serializer does not actually represent a packet directly. You can sort of think of it like a packet future. It describes all of the information necessary to figure out how to allocate and serialize a packet later. Let's walk through a very simple example. So this is a sort of silly straw-man example of a function that might construct a serializer. It takes a serializer in and then wraps it in IP and wraps that in Ethernet and then returns it. So we start off with a serializer called ser. We encapsulate it in a new IP packet builder. Remember that builder is then the IP layer of the onion, and we get a new larger onion back, right, a new larger serializer. We encapsulate it again by adding yet another field, which is this Ethernet packet builder. The struct just keeps growing. We get the Ethernet packet onion, an even bigger onion, and we can now return it. OK, so we know how to construct these things, but how do we actually use them? How do we actually use them to create a buffer and serialize into it? The way that we do this is with the serializer trait again, and there is a method on the serializer trait called serialize. I'm good at naming. The serialize method is responsible for producing a buffer. So when you call it, it doesn't take an existing buffer. It actually creates and gives you back the buffer. And it might use a pool of buffers or it might use something that's already allocated; that's up to the implementation of the trait, but it gives you back the buffer. And it takes the number of prefix bytes that you need so far. Remember that all of this is sort of recursive. So if I say I want to serialize an Ethernet packet, it says, OK, well, how many prefix bytes do you need for all of the packets outside of me? Because I don't know that. And so what you do is you start at the outside of the onion, and you work your way inwards, building up the prefix length. So you start at the very outer layer, and you say, well, I need zero bytes so far. And then you go to the next layer, and you say, OK, well, this is an 18-byte header, so now I need 18 bytes. And then this next header, and so on and so forth. And you just build up the number of prefix bytes that you need until you get to the inner core. And once you get to the core, now you know in total how many prefix bytes you need, and it's the core's responsibility to satisfy that request. So it has to give you a buffer whose body contains all of the stuff that the innermost packet needs to contain, and that has enough prefix length to handle all of the headers that came before it that you're going to need to serialize in the future.
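Sketched as code, the two traits, the recursive step, and the straw-man wrapper might look like this; this is hedged, the real definitions differ in detail, and `IpPacketBuilder` / `EthernetPacketBuilder` are stand-in names:

```rust
trait PacketBuilder {
    fn header_len(&self) -> usize;
    fn serialize_header(&self, header: &mut [u8]); // write one header in place
}

trait Serializer: Sized {
    type Buffer: Buffer;
    // Produce a buffer whose body is this whole onion, with at least
    // `prefix_len` spare bytes in front for the layers outside of us.
    fn serialize(self, prefix_len: usize) -> Self::Buffer;

    fn encapsulate<B: PacketBuilder>(self, builder: B) -> Encapsulated<B, Self> {
        Encapsulated { builder, inner: self }
    }
}

struct Encapsulated<B, S> {
    builder: B,
    inner: S,
}

impl<B: PacketBuilder, S: Serializer> Serializer for Encapsulated<B, S> {
    type Buffer = S::Buffer;
    fn serialize(self, prefix_len: usize) -> S::Buffer {
        let hdr = self.builder.header_len();
        // Inward pass: accumulate the prefix we need; the core allocates.
        let mut buf = self.inner.serialize(prefix_len + hdr);
        // Outward pass: claim our header bytes and fill them in.
        buf.grow_front(hdr);
        self.builder.serialize_header(&mut buf.body_mut()[..hdr]);
        buf
    }
}

// The straw-man from the talk, approximately:
fn wrap(ser: impl Serializer) -> impl Serializer {
    ser.encapsulate(IpPacketBuilder::new(/* routing info */))
        .encapsulate(EthernetPacketBuilder::new(/* routing info */))
}
```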
And so once you're in that innermost onion, and you've satisfied the request, you have a big buffer, it has the body, and so on, then you can return it. And now you start walking your way out. So remember, we went in calculating the prefix length, and then you walk out and actually serialize stuff. So you go to the first layer, serialize the first header. And then you expand the body, so now the body encapsulates the entire next packet. Then you return it again, and the next layer serializes its header, and so on and so forth. And by doing this, you make sure that you can just serialize in one shot. You allocate once, you serialize once, and you're done. Let's look at an example of what this looks like. So this diagram is wildly not to scale. Imagine that this struct here is really tiny, that body is really big, but diagrams are hard. So again, this serializer here represents the entire UDP packet, right? So it has all of the body bytes that the application asked us to send. And it has that packet builder for the UDP layer, right? That shell of the onion that just represents all the stuff that needs to go in the UDP header. And so when we want to send this, we're going to pass this into the send IP packet function, and we're going to pass it by value, right? This tiny little struct moves from this stack frame to this stack frame, but the body itself again doesn't move. We're going to shrink this just for brevity, because we're going to need to make these slides pretty big, so that thing is the same thing as that thing. The send IP packet function then is going to calculate its own routing information, right? It needs to figure out where this stuff is going, so on and so forth. And on that basis, it's going to construct an IP packet builder for the IP layer of the onion, and boom, construct a bigger serializer, right? The IP packet builder represents the IP layer of the onion. This entire serializer represents the entire IP packet and everything inside of it. Again, send it to the next layer, so send it to the send Ethernet packet function, compressed for brevity. Take this serializer and encapsulate it with Ethernet, right? So add another layer of the onion; now we have the Ethernet onion. All right, so now we've actually constructed this entire thing and we're ready to serialize it. Let's see how that works. This is a diagram that, if we had a giant wide screen, would have been totally horizontal, but we don't. So each of these dot-dot-dots represents the entire next line, right? Just imagine that this is one big, long thing. It's one, you know, physically small, but conceptually big struct. So the first thing that we're going to do is call serialize(0), right? The outermost layer is the outermost layer, so we don't need any prefix bytes ahead of it. And in order to figure out how many extra prefix bytes we need for this layer, when we call serialize into the next layer, we have to figure out how many bytes the Ethernet header is going to take up, so we ask it. We say, dear Ethernet packet builder, how many bytes will you need? And it says 18, so we go, okay, great. We needed zero, now we need 18, so we need 18 in total. And again, the next innermost serializer, this one for the IP layer, goes, okay, well, I'm supposed to give this person 18. My layer itself, the IP layer, right, the IP header, is 20 bytes long, so I need an extra 20 bytes. So now I need 38 bytes in total. The UDP layer adds another 8 bytes, so now in total it's 46.
But now we're in the middle, right? We've actually gotten into the inner serializer, and there are no more serializers to recurse into. We can't recurse anymore, so we actually have to allocate the thing. So first we have to figure out how long the body is; let's say it's 1,000 bytes. And now we know that we need 1,046 bytes, so we allocate them. Again, we'll stack allocate later; for the time being we're doing it on the heap. We say, please give us a buffer which is 1,046 bytes long. And that buffer starts off entirely as prefix, because we haven't serialized anything yet, so the body is completely empty. There's nothing there. So first we serialize the body that we were asked to send into the buffer. So now the body of the buffer is the same thing as the body of the UDP packet. And now we can start unwinding back out of the stack. First we go to the UDP packet builder and we say, hey, you know how to serialize a header. Can you serialize yourself into this buffer, please? And it does. And it expands the body so that now the body contains both the UDP packet body and the UDP packet header. It's an entire UDP packet. And now we're done. This call to serialize has satisfied its constraint. It was asked to give a buffer which contains the entire UDP packet and 38 bytes of prefix. Well, we have that. So we can return it. Now this next call to serialize does the exact same thing. It goes, OK, great. I have a packet whose body is what it needs to be already. There is enough prefix space. So I will now serialize the IP layer in place. And again, now the body contains the entire IP packet and has 18 bytes of prefix space. So we can return that. Finally we can serialize the Ethernet layer. And now the body contains the entire Ethernet packet. Thank you. Yeah. And so we're entirely done. And we can return this thing. And now we have a buffer. So we talked about parsing. We talked about serialization. Let's put it together and forward. So the goal this time is, first of all, no heap allocation. We're actually going to allocate on the stack. We're going to receive an Ethernet packet. We're going to parse it. We're going to parse the IP packet inside of it. We're going to decide that we want to forward the packet. So we're going to re-serialize it and send it to somebody else. First we have to update the header. There are some modifications that we have to make to the header. Then we have to serialize it inside a new Ethernet packet. All right, so let's get into it. So parsing is super easy. We saw in the parsing section how this works. This is going to be super straightforward. Again, we have the buffer. The actual bytes of it live on the stack. We send it into the receive Ethernet packet function. We parse, creating one of these Ethernet packet objects, which consumes those header bytes from the body into the prefix. We drop the thing so we can keep moving the buffer around. And we send the buffer into the receive IP packet function. Again, we parse, this time getting an IP packet struct. And so now we have to modify this stuff in place. So this is where it gets interesting, because the modifications that we need to do operate on the sort of logical structure of the IP header. So it's really important that this header is structured and gives us access to struct fields and stuff like that, for us to be able to update it. And so the updates that we're going to do are actually methods on the IP packet object. So here's some code. It's very simple.
We just say: packet, please decrement your TTL, which is just a counter of how many hops the packet has gone through. And that's just vanilla Rust code, right? It's struct.field minus equals 1. And yeah, so it's just normal Rust code. So now we have this thing. Now we drop this thing. There's a little bit of a problem here. Does anyone spot the problem if we want to serialize the IP packet? What is the contents of the body? Sorry? It's not modified. Well, so the body hasn't been modified, but importantly, the body is the body of the IP packet. We don't want to forward the body of the IP packet. We want to forward the whole IP packet, including the header. But we just parsed the header and threw it away. So there's this cute little hack that we do. It's very straightforward. It's called undo parse. We say, how many header bytes did you just parse? Oops, please undo it. And so we do. It just takes the pointer and goes, oh, you just parsed 20 bytes. OK, oops, bump it 20 bytes. So we go from that to that. And now we have an IP packet that we can forward. And so let's do that. So we decrement the TTL. We figure out how many header bytes we need to undo, and we undo them, and then we serialize. Very straightforward. Now, you might be wondering: I thought that send Ethernet packet was supposed to take a serializer. Well, it is. Here's the core idea that I want to get across. Buffers are serializers. The serializer trait simply says, can you give me a buffer that has these properties? If the buffer already has those properties, then it says, yes, here I am. It just returns itself by value. The serialize method, you might have noticed if you were looking closely, takes the serializer by value, takes it by ownership. So you basically take this buffer and you say, buffer, give me a buffer. And it says, I'm already here. Here I am back. And that's what we're going to do. So first we take the buffer. We're in receive IP packet currently. We send it into send Ethernet packet. Send Ethernet packet then says, OK, great. Here's a serializer. I need to now encapsulate it inside of Ethernet. So we do. Construct the serializer. I'm going to squash these stack frames just for brevity. So then the send Ethernet packet function says, OK, great. I've got my serializer and I need a buffer. So we call serialize(0). We want to serialize this thing. It says, OK, the Ethernet header asks for 18 bytes in the prefix. So we'll call serialize(18) on our serializer. Now, the buffer is the serializer. So the buffer looks at itself and goes, well, I already have 18 bytes of prefix. So we're done. And it returns itself by value. And the call to serialize(0) goes, OK, great. I have a buffer that has 18 bytes of prefix space. I can just serialize this header directly in place. Then we finally have a buffer, which is still on the stack. We haven't ever copied it. It contains all of the bytes of the Ethernet packet. And we can just send this thing. So we've gone from receiving the packet on the stack, keeping it on the stack, we've parsed it all the way, we've decided that we want to re-serialize, we've gone, hey, we already have this buffer space, why don't we just reuse it? And we reuse it. And finally, we can turn around and go, OK, great. Let's just forward it out. So this is what forwarding looks like. And this is sort of like, this is it. This is as simple as it gets.
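A hedged reconstruction of what that forwarding path might look like in code; the names approximate the real API, and `undo_parse` is the method described above:

```rust
fn receive_ip_packet<B: Buffer + Serializer>(mut buffer: B) {
    let mut packet = buffer.parse::<Ipv4Packet>().unwrap();
    packet.set_ttl(packet.ttl() - 1);  // update the header in place
    let header_len = packet.header_len();
    drop(packet);                      // release the borrow on the buffer
    buffer.undo_parse(header_len);     // body is now the whole IP packet again
    // The buffer is itself a serializer, so no new allocation happens here.
    send_ethernet_packet(buffer.encapsulate(EthernetPacketBuilder::new(/* ... */)));
}
```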
There's obviously, in practice, a little bit more complicated logic, because we have footers and other stuff like that. But this is fundamentally the core of the algorithm. And in none of this stuff is there any unsafe code. We'll talk about unsafe in a second, but none of this has any unsafe code. Thank you. All right. So that was all about how we do parsing and serialization. I mentioned at the very beginning that our header references are actually struct references, at which you might have said, how the hell do you do that? And the answer is unsafe. So this diagram actually looks like this diagram. It's just like, that's a header reference. So the way that we do this is with an unsafe marker trait called FromBytes. If a type implements FromBytes, it promises to satisfy the following property: any sequence of bytes, size-of-T long, is a valid instance of this type. What that means is that I can take random bytes that I got off the network and just go, this is a thing now. I can just treat it as a thing. This is not true of all types. If I take a random sequence of eight bytes and I say, this is a reference now, that is not guaranteed to be a valid reference. It might just point randomly into memory somewhere. So it's important that this only actually applies to a subset of types, and we have to be very careful about which types we apply this to. For composite types, like structs and arrays and unions, all the fields or the elements just have to be recursively FromBytes. It composes nicely. And then if you really want, for enums, they must be C-like and have a power-of-two number of variants. Exercise to the reader why that's true. All right. Finally, we have a custom derive for this thing. So we can just slap derive(FromBytes) on something, and we've got a custom derive that will analyze your type and go, that's not FromBytes, what are you talking about? Or, yeah, that's fine, we'll go ahead and emit the impl. So that's one thing. The next thing is the Unaligned trait. The Unaligned trait says that something is unaligned. Fantastic at naming. And so you can slap a derive on something and say, great, this thing is Unaligned. Cool. It's important to note that this has some little wonky implications, such as: this EtherType field really should be a u16, but it can't be, because u16s are not unaligned. They have an alignment of two. So you have to use some byte-order type stuff. It's super simple, but it's kind of annoying. But anyway, it has to be unaligned because we don't know where in the buffer we might be parsing something from. So once we have those two things, we can build this awesome function. Now, this function doesn't literally exist in the code base, but there's something very analogous to it. And basically what it does is called ref from bytes. You say, here is a sequence of bytes of the appropriate length. If it is not of the appropriate length, then we return None. But if it is of the appropriate length, then it just gives us back a reference to T, because the type system already took care of making sure that all of this stuff was safe. So you don't have to do any work at runtime. Ideally, this function compiles to nothing. This function shouldn't actually ever run. But it's a place for us to write the unsafe keyword. And what it does is it just says: here's bytes, here's a T, cool. Everything is safe.
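Here is a sketch of both `ref_from_bytes` and the `take_obj` method that the talk builds on top of it; the signatures are guesses, but the safety argument is exactly the one just described:

```rust
use core::mem::size_of;

fn ref_from_bytes<T: FromBytes + Unaligned>(bytes: &[u8]) -> Option<&T> {
    if bytes.len() != size_of::<T>() {
        return None;
    }
    // SAFETY: T: FromBytes means any size_of::<T>() bytes are a valid T, and
    // T: Unaligned means there is no alignment requirement to violate.
    Some(unsafe { &*(bytes.as_ptr() as *const T) })
}

impl SomeBuffer {
    // Hypothetical: consume size_of::<T>() bytes from the body into the
    // prefix and reinterpret them as a reference to T.
    fn take_obj<T: FromBytes + Unaligned>(&mut self) -> Option<&T> {
        let bytes = self.consume_front(size_of::<T>())?;
        ref_from_bytes(bytes)
    }
}
```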
The compiler verified it for us. What this allows us to do is build this really cool function, which uses one of my favorite, favorite, favorite methods from the entire code base. So this is actually the parsing function that we really have in the Ethernet stack. It's stripped down; I've removed some error handling. But we have this method called take_obj, which I just love. It is a method on buffer, and it says the following: consume size-of-T bytes from the buffer, add them from the body to the prefix, reinterpret them as though they were a reference to T, and return that reference. And so parsing then just becomes: OK, you have a header. It is a struct. You have a header struct. Cool. Take it. Done. That's all parsing is. So this is literally what the Ethernet parsing function looks like. Similarly for all the other parsers, it's basically just take a struct, take a struct, take a struct, done. And from here on out, by the way, this is a structured reference. So now we can just access it. We can say header.src_mac or whatever. So that's all I've got. So, the conclusions from all of this. Again, as I said at the beginning, in C and C++, you can absolutely do this stuff. It is done. If you look at most high-performance networking stacks, most of which are written in C and C++, they do all of this stuff. But they just do it unsafely. All of the guarantees that I described, that are given by lifetimes and references, immutability versus mutability, the FromBytes and Unaligned stuff, all of that is just reasoned out in the head of a programmer. And sometimes it is so unsafe that it can't be done. Some of you probably know what I'm referring to here. Famously, parallel CSS layout computation in Chrome was attempted a number of times and was never done, because they just couldn't iron out all of the bugs in parallel C++ code. It was just too hard. Of course, it exists in Firefox. Why? Because it's written in Rust. And yeah. So the takeaway that I want to leave you with is that safety brings speed and developer friendliness. When I'm working on this codebase, I can move quickly. All of the other developers that I work with, they can move quickly, because we're not afraid that we're going to accidentally invalidate some subtle invariant on a list of invariants that isn't actually written down anywhere. And it's developer friendly. Because when we get new people to the codebase, in fact, I think for a majority of the developers that we have on this codebase, this is their first Rust project. And what that means is that they can come to this project and they can say, OK, this little FromBytes stuff is terrifying, but the rest of the codebase is fine. It's all safe. There's no unsafe. I don't have to worry about keeping all of these invariants in mind. I can just hack on the thing. And they do. So that's all I've got. Here are some code links. This last one here doesn't actually exist yet, but it will hopefully by the time this stuff is up. There's my email if you want to email me about anything. Thank you very much. Appreciate it.
What makes Rust different is not that you can write high-performance, bare-metal code. What makes Rust different is that when you write that code, it is clean and easy to use, and you are confident in its correctness. In this talk, we discuss a new, high-performance networking stack being written in pure Rust. We discuss how Rust has allowed us to squeeze every last drop of performance out of the stack without sacrificing usability, productivity, or the confidence that our code is bug-free. We focus specifically on packet parsing and serialization, which we accomplish with zero copying, zero heap allocation, and very little unsafe code.
10.5446/52164 (DOI)
All right. So, everybody, I'm Chinedu Francis Nwafili, as introduced. We're going to be talking about Percy, which is, as stated, a library for building isomorphic web apps in Rust. A quick warning at the beginning: a lot of this stuff is experimental. There are areas where it works pretty nicely; there are areas where it's a little verbose, a little inefficient. Still working on that. But hopefully the direction is somewhat interesting. So we'll talk about a few things. One, what is Percy today? How are we targeting Wasm? What does it look like? How does it feel? And since this is a Rust conference, and I'm assuming a lot of you care about Rust and Rust details, we'll talk about it with more of a technical focus versus a bird's-eye view. So the approach I took is that I grabbed a bunch of screenshots of a bunch of different code that compiles to WebAssembly, and we'll sort of just talk through how that works. You'll hopefully walk away with a solid understanding of how and why things work the way that they do, and then sort of the future that will hopefully come from that. And then at the end, time permitting, we'll maybe take a couple minutes to look at a quick demo, and maybe a special surprise. Okay. So, before we talk about Percy, I have to shout out a couple crates. Alex is here somewhere. There you are. These do almost all of the work, and then I just kind of use them. So, thank you. It's wasm-bindgen and web-sys, which live in the same Cargo workspace. And I won't try and explain all the technical details, because the person that wrote it is actually here. But in short, these help you generate a lot of the bindings that allow your Rust code to talk with native browser APIs. Today, WebAssembly code doesn't have any access to things like the DOM directly. You have to talk to JavaScript, which then talks to your host and does everything for you. In the future that will change, and wasm-bindgen is built for that future. So we're sort of headed in the right direction over time. Not quite there yet. And this is Percy. I wanted to call out this piece right here, which is that it's mostly Rust, and that's sort of the theme. So, yay. And what was the motivation, before we dive into the technical details? So I'm working on a Rust plus WebAssembly WebGL game. While doing that, I realized, wow, I love Rust, and I want to use this for everything. And so I needed something to build websites with. There was another library out there that I found that looked really interesting, but it didn't have server-side rendering. A lot of people here probably aren't familiar with the web, because a lot of you write C++. I don't know how, but I'm learning. And so that just means that on a search engine, if you're searching for a website, Google needs to index it, so it needs to know what's on it. And nowadays it can kind of do a good job of indexing things that aren't rendered on the server, but your best bet is to have the server send down the content. The crawler sees that and can index it just fine. And so we needed a library that can render on both your server and on the client in the browser. And so that's how Percy was born. That was the real motivation, plus that it's all Rust. So this is just a screenshot of something I'm working on now using Percy. It's a website for my game, so HTML and CSS generated from Rust. And it's powered by sort of real-world development, where I'll get my timer up, work on something, run into issues, fix them as we go, right? Pretty simple.
So I think we're going to get to code soon. Cool. So let's start diving into sort of the pieces that power this, right? So if you've written HTML before: it's basically similar to XML. You define some node, and then this node can have child nodes. And then based on the attributes, properties, whatever you want to call them, that you specify, the browser will interpret this into graphics that it eventually renders. And so when writing web applications in Rust, while you could sort of just hand-code the data structures that end up leading to these HTML strings, you probably don't want to. And so you need a macro to do that, right? So this is one of the first key pieces of Percy. It's the html! macro. So you can see we specify a div with an id property, a class property. And then inside of that div, there's a span, which has a text node that says "hey" with a little smiley face, and a button, which has an on-click handler, which is setting some cell with some value, sort of all the way down. And so you write HTML. Obviously it doesn't look exactly like it would if you were, say, in a regular HTML file. There are these weird commas everywhere, which I will need to talk to more procedural macro experts about to see if I can figure out a way to get rid of them. But for now, they're there. But it's mostly similar, and you sort of pay a little trade-off with some slight unfamiliarity; on the good side, you get all the type safety, and it's really hard to write things incorrectly. And at the bottom (we'll talk about this a bit later) you're seeing that we're rendering with .to_string(), because our HTML here is just a regular struct that implements Display, so you can render it to a string. And that will power server-side rendering, which we will get to. So we saw that HTML, right? But what does that create? So at the end of the day, that macro generates a virtual node. And so a virtual node is a tag (so div, span, bold), its properties (so id, class, data-foo, whatever sort of attribute you want), events (on click, on mouseover), and then children. So a child is another virtual node, and it turtles all the way down. And then optionally, it can be a text node. So as you saw before, there was that "hey". That's a text node, which in the browser world is different from an element. You have to handle it kind of weirdly, but as long as you sort of know the APIs, you can make it all work together. So a virtual node can represent either text or a regular element, and based on what's set here, we'll know how to eventually turn this into a real DOM element, which we'll get to. And as the title says, this is the struct behind everything: your HTML generates this, the diffing and patching virtual DOM algorithms use this to then update the DOM, and everything is really built around this virtual node concept. And again, the nice part here is it's just some data, and then we have methods later to use that data to render in the browser, which we'll get to.
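As a rough sketch, with hypothetical field names rather than Percy's exact definition, that virtual node shape looks something like this:

```rust
use std::collections::HashMap;

/// A hedged sketch of the virtual node described above; Percy's real
/// struct differs in its details.
struct VirtualNode {
    /// The tag: "div", "span", "b", ...
    tag: String,
    /// Properties and attributes: id, class, data-foo, ...
    props: HashMap<String, String>,
    /// Event handlers ("click", "mouseover", ...) as boxed callbacks.
    events: HashMap<String, Box<dyn FnMut()>>,
    /// Children are virtual nodes too; it turtles all the way down.
    children: Vec<VirtualNode>,
    /// If `Some`, this node is a text node like "hey", not an element.
    text: Option<String>,
}
```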
So the first piece of it is the diffing algorithm. If any of you are familiar with React, which kind of popularized this concept: you have two different virtual trees. On the right, there's a bunch of test cases, maybe a div and a span on top. And when we're diffing those, what we generate as our diff is a vector of patches that will get applied to a real DOM element. For something simple like this, there's only one patch, which is a replace patch, and so that's saying we're going to replace the first node with a span. And what isn't shown here is the test that sort of makes sure this works, but hopefully you can assume that it does. And in the second one, we're replacing the second node, which is index one (so a bold, the <b></b>), with a strong, right? And so on. There are replaces, you can append children; you see it at the bottom. And so how it works (and this might be familiar to you if you've written other, maybe JavaScript, front-end libraries) is that in your application crate, you are generating these virtual nodes, you then diff two of them, as you can sort of see here, and you get these patches. And then later on, in the browser, you'll go to your real DOM element using wasm-bindgen and say: okay, I want to apply this patch, I want to replace this node, I want to set this attribute. And then your browser is updated, which you'll see a visual of in the demo, hopefully. And so that's the diffing piece. The patch piece is, again, just powered by regular Rust structs, right? You can append children: so you have a DOM node, and you append other elements to it. You can truncate children: so you have a DOM node with some children, and let's say that your new DOM state no longer has those children; there would be a truncate patch. There's a replace patch, where maybe a div becomes a span, so you would replace that node. And then there's adding and removing attributes, and setting text. And so effectively, every single thing that ever changes in your DOM can be represented by these different mutations, and you just have a vector of them, and you apply them. As we go further along and start to optimize things, the underlying implementations of how we handle these patches will probably change or be cached in certain ways, et cetera, but effectively all changes to the DOM look something like this.
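A hedged sketch of those patch variants, reusing the `VirtualNode` sketch from earlier (again an approximation, not Percy's exact enum):

```rust
/// Every DOM mutation the differ can emit, as plain data. The `usize`
/// is the index of the node the patch applies to, found during the diff.
enum Patch {
    /// Append new children to the node at this index.
    AppendChildren(usize, Vec<VirtualNode>),
    /// The new tree has fewer children here: drop everything past `len`.
    TruncateChildren(usize, usize),
    /// A div became a span, say: swap the whole node out.
    Replace(usize, VirtualNode),
    AddAttributes(usize, Vec<(String, String)>),
    RemoveAttributes(usize, Vec<String>),
    ChangeText(usize, String),
}

/// Diffing two virtual trees yields the patches to apply to the real
/// DOM. Sketch only: the real body walks both trees, emitting a Patch
/// per difference it finds.
fn diff(old: &VirtualNode, new: &VirtualNode) -> Vec<Patch> {
    let _ = (old, new);
    unimplemented!()
}
```

Applying a patch is then one wasm-bindgen call against the real node per variant.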
So, as you can imagine, when you're dealing with something as new as combining Rust and WebAssembly, you can bang your head for a while; things might not work as you expect, and it's very important that you have high test coverage. So another thing that I didn't write, but I get to talk about, is wasm-bindgen-test, where, due to some sorcery that I don't fully yet understand (and again, I don't need to, because Alex is here), you can use geckodriver, chromedriver, or safaridriver, compile to WebAssembly, run that code inside one of these browsers, and then you effectively have a Rust test harness running. I probably should have gotten a screenshot of the output, but I can run all of these tests, they run in the browser, I'm diffing and patching real DOM nodes, and then I can assert, as we're doing kind of on the left here, that I get some node in the DOM that matches what I tried to patch. And so it's not just "theoretically this works in memory, and I'm hoping it works in the browser": we can actually test against the browser using the awesome tooling that exists. Unit testing is another focus, where one thing that I quickly found is that waiting a few seconds for your code to recompile and then refreshing your browser is incredibly frustrating, and makes you want to flip your laptop over. And so that quickly inspired testing tooling. And so what's kind of highlighted here is: again, since all of your HTML, your entire web application, is just a bunch of virtual nodes that have virtual node children, you can define methods that let you access different virtual nodes. And so here we're rendering some view, we're looking at the children, we're asserting that there is a child at index zero with certain text, and making sure that all the rendering works. And that would work for any component that you're working on. And so, because it's all just data behind it, you can implement whatever method you want and test against whatever you want, so testing actually ends up being very easy, and you don't have to refresh the browser and get upset. So one other interesting piece of Percy that is getting worked on a bit now is CSS in Rust. Here's a highlight of it before we talk about the underlying implementation details, which are probably a bit more interesting. At the top here is a quick screenshot from a very simple web app that we'll look at towards the end, and as you can see, there's this beautiful gradient, which I did not take from Yahoo, and I did not just change the colors. And at the bottom here's the code that powers it, right? So there's this css! macro that has this align-items, and the background has a linear gradient. And what we'll do with that is: there's a procedural macro that runs at compile time, and we take that CSS and grab it out. There's a global counter that says how many of these CSS blocks we have seen. So this is the first one, right, so we'll generate a class called css-rs-0 (as you can kind of see in the example on the right, at the bottom of the test cases, there's a class called css-rs-0), and that will then have all the CSS that you defined. Then we'll get css-rs-1, that will have its CSS, and we'll add all of this into one large growing string. We'll write that to a file that you specify with an environment variable called OUTPUT_CSS. And so at compile time, a procedural macro grabs all of your css! calls, generates a bunch of classes, and writes them to a file. And so then, when you're running your application, you are only sending down class names that your virtual nodes are using, and the CSS will obviously have the same class names, and it works. And so you can write your CSS right next to your Rust views, which is especially useful for people who want to publish things to the web but don't want to learn a bunch of stuff like Less or Sass or all of these tools that compile down to CSS: you can just write a little CSS block right next to what is using it, and it works. I'm using this in production right now, and it feels very nice. The reason that I know that I like it is that when I start from scratch, I still go in this direction, versus walking back towards something that I'm more used to. So I'm going to keep exploring that. And this is just another example of it, where you have your nav bar CSS here, which will eventually just get turned into a static string, the css-rs-0 that we looked at before. And then here you have a div with a class that has that same static string as the class, and the macro will parse that into the class property in the virtual node. And so then, when you render this HTML in the browser, you end up with a div with the right class, and then you're sending down that CSS, and it all hooks up together.
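As a sketch of that flow; the macro syntax here is approximated from the talk's screenshots, and the generated class name and the gradient colors are purely illustrative:

```rust
// Hypothetical component using the css! convenience. At compile time a
// procedural macro extracts the CSS below into the file named by the
// OUTPUT_CSS environment variable and leaves behind a generated class
// name, something like "css-rs-0".
fn striped_header() -> VirtualNode {
    let header_css = css!("
        align-items: center;
        background: linear-gradient(to right, #111111, #999999);
    ");

    // The generated class name is then used like any other class, so
    // the served CSS file and the rendered HTML agree on it.
    html! {
        <div class=header_css,>
            <span>Hello</span>
        </div>
    }
}
```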
So that should be CSS. And so one other piece that powers the puzzle that we'll look at at the end is the router. Actually, for this piece, there are a lot of routers that are really interesting being worked on now, but they're mostly for back-end applications; I know Rocket has one, and other back-end frameworks use routers. The difference (not necessarily an issue) is that front-end routing is largely similar but also has very different concerns: there's no authorization, there aren't any headers, you're not dealing with a request-response cycle in that way. And so you need something that, yes, matches route definitions against incoming paths, but has a whole different set of guards around it. And so there is definitely a lot of room to make more progress on the routing side. What we do have now is a type-safe Router struct and a Route struct, where I might specify /users/:id, as you can kind of see on the right, right in the middle. And so we'll see that, right: at the top, our view has an id that's a u32, and so we won't match a route like /users/foo near the bottom, because foo is obviously not a u32, but we would match /users/5, because that is. And so the nice part there is: if some of you do web development and use JavaScript, you find yourself repeatedly going through a bunch of if-statement hurdles to make sure that you're matching against the right input and handling all the other stuff, whereas thanks to Rust's type safety, you can just completely trust that you will never have to handle anything that should not be there. Not a security expert, don't trust me there, but you will usually not have to handle anything that should not be there, unless there's some weird edge case where someone's giving you a really big number or something. And so that looks like this, where you have a Router, which has a vector of Routes, and you add routes to it by pushing them. And then eventually you call a view method whenever you change... I don't want to go too deep into the web, but in your browser URL you have the path that you're at, and you'll match that against your router, and you'll say: give me the view that's associated with that, which comes back as an Option of a boxed trait object. You'll get back the view, you'll render that view, and then you'll display to the user what they should see. And then on the right side, there's the actual Route, which is your route definition (so that might be /users/:id), the parameter types, as we looked at before (id might be a u32, you might have a string for another parameter, et cetera), and then your view-creator function, which takes in all the parameters that your view was provided (so an id as an unsigned int, and everything else) and then generates a boxed view that you can then render. And a View is just a trait that Percy provides that has a render method and some other niceties. So that's a lot of the underlying pieces of it. Some of the longer-term goals are to be able to use Rust and WebAssembly for web dev without having to learn a lot of new concepts. And so a lot of the philosophy behind the tooling that Percy provides is not needing to learn a bunch of terms that we've made up.
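A rough sketch of those shapes, as a hypothetical simplification rather than Percy's actual API (again reusing the `VirtualNode` sketch from earlier):

```rust
/// The View trait Percy provides, sketched: a render method and, in the
/// real crate, some other niceties.
trait View {
    fn render(&self) -> VirtualNode;
}

/// One route: its definition, a typed matcher, and a view creator.
struct Route {
    /// e.g. "/users/:id"
    route_definition: &'static str,
    /// True only if every captured segment parses as its declared type,
    /// so "/users/5" matches a u32 param but "/users/foo" does not.
    matches: Box<dyn Fn(&str) -> bool>,
    /// Build the boxed view from the already-validated incoming path.
    view_creator: Box<dyn Fn(&str) -> Box<dyn View>>,
}

struct Router {
    routes: Vec<Route>,
}

impl Router {
    fn add_route(&mut self, route: Route) {
        self.routes.push(route);
    }

    /// Called when the browser URL changes: find the first matching
    /// route and hand back its view, if any.
    fn view(&self, incoming_path: &str) -> Option<Box<dyn View>> {
        self.routes
            .iter()
            .find(|route| (route.matches)(incoming_path))
            .map(|route| (route.view_creator)(incoming_path))
    }
}
```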
And so: there are industry web-dev standards now, like a virtual DOM and diffing and patching, and we try to reuse those and not create new ones. Which can lead to not inventing really interesting things, but that's kind of the point. I like when my stuff is boring. And so the goal here is to be the underlying tooling that can power other frameworks that have these interesting ideas, ideas that might require a bit more of an investment if you want them; but if you don't, then you just use the underlying tooling, or swap out other things. Another piece of it is to have this be heavily trait-based. And so one of the beauties of Rust that I've found so far in my short time using it is the trait system, where, let's say, the virtual node right now is its own struct. So you kind of can't really use a lot of what Percy offers unless you're using that virtual node. But as we move that to instead be a trait, then someone can come and say: wait, this diff/patch algorithm that you've implemented is terrible, I have a better one. And then you can just go and use that without having to rewrite any of your application, because it implements the same virtual node trait that you're already using. There's also the Percy book, which we've started writing using mdBook. Another shout-out to wasm-bindgen, because I just copied your Travis YAML and got that working pretty quick. So one other piece before we get to the demo is contributing. If you're interested in doing more WebAssembly stuff, obviously I'll be here and we can talk about how more of it works under the hood. But the best way to contribute is the real-world-driven-development way, which is: try to build something for the web in Rust, open issues about the stumbling blocks that you run into, what's not fun, what didn't make sense, and then we'll fix those one by one. And eventually, hopefully, there are a lot of options out there for you to build real web applications with Rust. And: demo time. Where's my "please clap" sign? Okay, so I have a really, really basic web app that I threw together here, what you kind of saw in the screenshot, and we're going to add a piece to it in the few minutes we have, right? So I'll zoom in a little bit. So, very simple. There's that nav bar that we rendered. Oh, yeah. How do I do that? I'm also working on a PHP-to-Rust compiler. No, I'm kidding. Command-what? I'm going to go Command, help, search. Straight up disappeared. Oh, there we go. Sorry? What is that? Oh, dear. View, okay. We've got some help from the audience. Oh, okay. Somebody's smart out there. So that nav bar that you saw, if I can type properly. There you go. Yes, perfect. So the nav bar that you saw looks like this (presentation mode is a little weird, but I'll get used to it). You have a NavBar struct, which has an active page and some reference to a store, which holds our state; that's how you get access to different state. It implements View, which has a render function. And so we'll build the nav bar item (which I'll go back to so you can actually see it) by passing in the path that it will link to, which is /contributors; the text for the nav bar item, which is "Contributors"; and some extra styling, which in here is margin-left: auto. And then we will render our nav bar with our home button and our contributors button. And then, if we go back into the browser, you see the contributors here, which is just that component that we rendered and embedded.
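As a rough sketch of that component, with names approximated from the demo and `Store`/`ActivePage` standing in for the app's real state types:

```rust
use std::rc::Rc;

struct Store; // holds the application state
enum ActivePage {
    Home,
    Contributors,
}

struct NavBar {
    active_page: ActivePage,
    store: Rc<Store>,
}

impl NavBar {
    /// One item: the path it links to, its text, and extra styling.
    fn nav_bar_item(&self, path: &str, text: &str, style: &str) -> VirtualNode {
        // In the real code this is an html! block producing an anchor tag.
        let _ = (path, text, style);
        unimplemented!()
    }
}

impl View for NavBar {
    fn render(&self) -> VirtualNode {
        let _home = self.nav_bar_item("/", "Home", "");
        let _contributors =
            self.nav_bar_item("/contributors", "Contributors", "margin-left: auto;");
        // Wrap both items in the nav bar's containing element.
        unimplemented!()
    }
}
```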
And then the home button, which just links to the root of the isomorphic web app. So if I click on this, as you saw, it goes to /contributors, so it goes to the contributors route. And that looks beautiful, and then this comes back here. And then another piece of it is... there we go. So again, since you're in Rust, you have everything that you'd expect, right? So there's the State struct, which, for something that's a nav bar and two buttons, obviously doesn't have a lot. But there's a click count, which is behind a reference-counted cell. And we can render based on that click count. So if we go to the home page view, you can see here that we'll use the click count in our rendering. And then, if we increment the click count with this button, the click count will increment in the browser. And so one piece that we'll quickly add, while there are a few minutes left, is the special friend, which right now just increments the click count, but instead we'll make it show the special friend. So how we'll do that is: we'll come back into here (I'll go to the correct project), and first we'll add a new message type. So this is just an enum. I've been pronouncing "enum" my own way forever, until I came here and a lot of people looked at me like I was crazy. So I guess it's called an enum. I was right? I told you. So we'll add this (I left a comment at the bottom in case I forgot), we'll add this ShowSpecialFriend variant, right? And then we'll go to where we handle all of our message variants. And if you have written something like React before, this is kind of like state.dispatch, where you have messages and then you have handlers for them. So we'll go to our state here, and then somewhere in this blob we handle messages, right? So there's this incoming message for showing a special friend. And that'll call (I also threw this at the bottom, because I forgot) ... so that'll set state.show_special_friend to true. So we're onto something. So now, whenever we send a ShowSpecialFriend message, our state store will grab that and say: what do we want to do? Okay, we want to set show_special_friend to true. So now, if we go back to our view for the home page: here we have that show-special-friend button that we saw before, right? You see the text "show special friend". It's sending a Click message to our store, which wraps that state. Instead, we'll make it send a ShowSpecialFriend message. And I will grab this so I don't have to think and talk, paste it here, delete this, a little of that, and then we are setting the special friend to be a different component if it's true. So you'll see there, on line 33, our special friend is just an empty div. This is starting to sound creepy. But if show_special_friend is true, then we'll instead show some image, and I recognize how creepy this sounds at this moment. And so now, if we just compile this, and I didn't make any mistakes... yes. So the server actually takes a few seconds to compile now. Whoever was talking about Actix earlier: I'm using actix-web, so let's talk. But it should be listening on port 7878. So let's make this 1,000, refresh, click a bunch of times, and show our special friend. Cool.
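A hedged sketch of that message flow, with names approximated from the demo:

```rust
use std::cell::Cell;
use std::rc::Rc;

enum Msg {
    Click,
    ShowSpecialFriend,
}

struct State {
    click_count: Rc<Cell<u32>>,
    show_special_friend: bool,
}

impl State {
    /// The store's single dispatch point: every UI event becomes a
    /// message that gets handled here, react/redux style.
    fn msg(&mut self, msg: Msg) {
        match msg {
            Msg::Click => self.click_count.set(self.click_count.get() + 1),
            Msg::ShowSpecialFriend => self.show_special_friend = true,
        }
    }
}
```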
So the last piece that we'll talk about really quickly before I go is a very high-level overview of how this works. And again, if you want to understand it a bit further, we can talk out there. But essentially, from top to bottom (oh man, that voice in my head): we have an actix-web server here, which will receive a request, and then it will return index.html. So what does that look like? That is here, where we have just some HTML file that it's reading from disk, and on line 11 it will insert the HTML from the server. So it will render the application to a string on the back end (that's the server-side rendering), inject it into this HTML file, and then send that down. So right away, even if you had JavaScript and WebAssembly disabled, you'd see something. Then you'll see at the bottom, on line 22, there's this bundle.js script and this isomorphic-client.js script. Effectively, these two scripts run, and they'll initialize the WebAssembly. So the client gets this index.html, and then it starts running the WebAssembly, and the WebAssembly takes over from there. The problem is: let's say, as you saw with that init thing that we did, we can say init=2000. The server will see that; it'll send the page down with 2000. I will switch to the right project, and there we go. So you see, it looked at the query string, it got the init flag, it unwrapped it, and then somewhere over here-ish it will create an application with that initial value, right? So then, how does the client know what to do from there? How does it not just start from zero, so that your page quickly flashes? That's where, in index.html, there's the nice part: the initial state JSON. On the server, we'll replace that initial-state-JSON placeholder with just a JSON string of the initial state. You could also use any other serialization you want; we're using Serde, so you could have a binary-encoded thing or something. And then on the client side, since this is all Rust code running, we're also using Serde to deserialize that back into your state struct. And so all you do is just send down what you want initially. The client, when it first starts on the WebAssembly side, will see that, deserialize it, and then have the exact same state that your server started off with. And so that's how you're able to seamlessly pick up with the same state that your back end had, and you don't just see the number change from, like, 2000 to zero. And I will just show you where that happens, and then we're done. Boom. Yes, it is all on GitHub. Add some contributions, use it. There's a bunch of instructions on how to get started. And so the last piece is here, where we'll pass in that initial state string that we read from the browser, create an application, add a subscriber so that any time state changes you can do something (in this case we call some update method to update the DOM), and then we return this client to the JS side, and then the JS just starts it, and you're done. That should do it.
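To make that state hand-off concrete, here's a minimal sketch of the usual Serde round trip, assuming serde (with the derive feature) and serde_json as dependencies; the struct and field are hypothetical:

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct State {
    click_count: u32,
}

/// Server side: serialize the state into the page next to the
/// server-rendered HTML.
fn initial_state_json(state: &State) -> String {
    serde_json::to_string(state).expect("state serializes")
}

/// Client side, inside the Wasm module: deserialize the same string
/// and pick up exactly where the server left off.
fn state_from_initial_json(json: &str) -> State {
    serde_json::from_str(json).expect("initial state deserializes")
}
```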
Percy is a modular toolkit for building frontend web applications in Rust. It supports server-side rendering out of the box. In this talk we’ll dive into how Percy works under the hood, starting with a look into its Virtual DOM implementation. You’ll hopefully walk away with an understanding of the pieces that power Percy - as well as a good sense of how to get started with using Percy to build your own web apps.
10.5446/52166 (DOI)
Hi, everyone. Does it look okay? Hello? Okay. Hi, everyone. Hi. Today, well, let's say the past few days, is a bunch of firsts for me. I've never been at a conference. That means I've also never talked at a conference, so you have to be forgiving. And I've never been outside Africa. So... so I'm actually quite grateful to whoever decided that my talk is worth being here, because it means a lot to me. So anyways, to the talk, right? Regarding convenience: it's a constant question that we tend to ask ourselves. What's the right amount of convenience? The answer depends on your background, inside and outside of Rust. People can adapt, but it's complicated by trying to please people with all of those backgrounds: people brand new to the language, people who have never programmed before. You know, there are people who try Rust as a first language. Actually, I'd like to meet people like that, to hear what their experiences are like. It's quite interesting: somebody who's never programmed before, and then they go for Rust, which I find complicated, and I've been programming for a while. So it's like, wow! Anybody who's used Rust as their first programming language in the room? Ah, cool, man. We should talk. That's cool. Okay. So that's cool. So yeah, what's the right amount of convenience? My thought (please speak to me afterwards about what you think of it, right?) is that the right amount of convenience is something that makes tedious things less so, without hiding things in a way that surprises anyone. So to me, that's the right amount of convenience. I don't know; I'm sure there are people who've thought a lot more about this problem than me. So that's going to be cool. But anyway, this is not what the talk is about. It's more about exploring those conveniences, really, with examples. So yeah. Anyways, let's move on. About me: my name is Tshepang Lekhonkhobe, as she said. Does it work? Yeah. I have a different view. Okay, anyways: I studied electronics formally, but I changed careers. Actually, one of the happiest days of my life is when I applied for a software job. I used to be an electronics technician. I used to work for a defense company, and what we used to build is surveillance systems. So I was at the end of the production line, where I actually tested them. It's quite a long process, because it's military-grade stuff. But I was not happy. It was not challenging. So I got out of there. That's why it was one of my happiest days when I got appointed. Somebody took a chance on me, because I was involved in open source. So I guess the person maybe liked the passion, because I was contributing: lots of documentation, some C projects, and things like that. So I applied and then I got in. So it's like, wow, I'm a software developer. Yeah, so one of my happiest days. And then a few hours ago, actually, I thought: wow, actually, one of my other happiest moments is when my talk was accepted. I couldn't believe it. What's wrong with everybody else? What's good about my talk? I mean, really. Okay. Anyways, I took a chance; I'm here. So anyways, yeah, I was quite a Python fan. I did quite a lot of contributions to the documentation as well. But yeah, for some reason I lost interest. Maybe I'll lose interest in Rust too in three to five years, but I'm here now. Anyways, yeah, I work at a small company in Johannesburg. There's not a lot of Rust in Johannesburg. And my boss, my boss looked around. Well, yeah, he lives in Johannesburg.
And because there isn't a lot, he noticed me and then contacted me. And he gave me a project that I did. Well, it's not really running in production, for whatever reason. But anyways, yeah, he took a chance on me, and then he got me hired. And yeah, anyways. Yeah, okay. Let's move on. So what did I miss? Oh, yeah, well, that's my Twitter handle. Okay. Back to the talk for real now. Okay, let's talk about looping. Let's say we want something that produces this sophisticated output. There are options; there are many ways to do things in Rust, right? So you can do it this way, right? You've got an array, three elements. We keep a counter, an index; we'll see why. That's pretty similar to how you'd do it in C: you actually keep track of the index and stuff like that. I mean, there's a problem, right? Because it's error-prone. If you get something wrong, you'll get a panic. Well, at least in Rust you get a panic, right? Anyways, yeah, so what do you do? You check the index. So basically, if your index is the same as the count, you break out of the loop, right? And what do we do there? We just print. This print is what produces this: you print the line, the index, then the index increments. I don't know, I'm not really explaining this well, but anyways. So things become a bit better if we make use of the while keyword, because loop is simpler, right? It doesn't take an argument. I don't know what you call that thing that comes after it. What do you call, for example, "index is less than count"? I don't know the name. Is it an argument to a keyword? A conditional. Okay. Yeah, so loop is just basic: it's basically "loop forever unless you break", right? And then we've got this way. We get a bit of help, but it's not much of an improvement, because you're still keeping track of the index. We just basically moved the break-out-of-the-loop decision elsewhere. It's still there; I mean, the idea in those lines is still there. So things are becoming much better now: you don't keep track of the index. You've got your array there, you create an iterator, and then you use the while loop again, but this time going through an iterator instead of just doing a conditional. So there you go: while let, Some, next, print. The cool thing about this is it will just stop for you. That's why, yeah, you don't have to keep the index. And more sugar: we have the for keyword, which takes care of that stuff. It practically does this for you: much simpler code, right? And then there's another way of doing this. It's not exactly sugar; I just wanted to put it out there. I also like it because it's cool: I can pretend that I'm a functional programmer a little bit. So there you go. And for_each is actually sort of new. I think it only got accepted into stable Rust some months ago, maybe two or three releases ago, I'm not too sure. Anyways, we're done there. Let's move on to the next thing. We're talking about ranges. Similar to the last time: sophisticated output. So how can we get this output? Here's one way of doing it. There is this type called RangeInclusive. It's not used as much as Range, but I like it because it's more obvious. I think somebody who's programming for the first time is going to look at Range and ask: if we're saying one to three, why are we not counting three? That's why I like the inclusive one. So you can do that, or you can just do this.
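As a sketch of the two forms on the slide:

```rust
use std::ops::RangeInclusive;

fn main() {
    // The explicit type: obvious to a newcomer that 3 is included.
    for i in RangeInclusive::new(1, 3) {
        println!("{}", i);
    }

    // The sugared form: the same thing, two dots and an equals sign.
    for i in 1..=3 {
        println!("{}", i);
    }
}
```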
It's another convenience. It's kind of a weird syntax. I actually preferred the triple dots, which people didn't like, because they said no, it doesn't look different enough from the two dots. Come on, man. But, well, I'm not everyone. So, well, there it is. Anyways, that was a small one. Method calls. This is quite interesting, because I learned about this recently, actually, while preparing for the talk. Anyways, so we have sort of a useless type, just created to make a point, right? So what's happening is that we have a struct, and the struct has three methods. One is new, which just creates it; increase, which increments the counter; and current, which reads it. MakesThree means you're going to create a counter that can only have a maximum of three. It ends at the number three. So this is the desired output, less sophisticated than the previous examples. So, yeah, on the left there, you see these two columns, actually. On the left is the loop counter, so you can see the first loop is zero, then one, two, three, four. It goes on like that, right? So how do we implement it? We can do it this way. So remember, one of our three methods is new: we create an instance of this MakesThree type, and then we use that weird range syntax to count from zero to four. That means five iterations, right? So what do you do? You take the type, you increase. Yeah, for each loop iteration, basically, you're going to increase this type of ours. And that's where you do the print in the loop there. So basically, yeah, it would actually be nice to have a laser pointer, right? Does anyone have a laser pointer? I have one, actually, come to think of it. Okay, cool. Anyways, so yeah, you see that (should I call it a template or whatever), so yeah, you're going to loop there. So basically, say you take the first iteration there. Press the green button. Imagine that, right? Take a reference for that: loop zero, counter one. So what happens there? We increase, and then we print that. So that's going to go in there, basically. I don't know what the correct term is; interpolation, string interpolation, I think. And then basically, we print the debug form of the output that we get from here. And that's what you get: you get one, two, three. And then, when it gets to three, it just stops (oh, I pressed the wrong thing); it just stays there, because it's not allowed to go past three. MakesThree. So that's the example. So it's kind of a verbose way of doing it. And there is another way. This is also explicit, but it's a little better to type, I would say, than the other one. But I mean, nobody ever does this, because Rust gives you a little bit of magic. So this is pretty much... so what this says is that you borrow MakesThree mutably, because increase actually changes it, right? That's what it means. And then here, you borrow immutably, because all you are doing is just reading the contents, right? So that's another way. And what everybody uses is this beautiful magic. So you ignore all of those things. What you mean to really do is stuff like that. But it would be good to know, in the back of your mind, what is actually happening, which is one of my motivations for the talk. Sometimes I actually wonder: what the hell is going on in here? Why all of this magic? But specifically this one, I never even thought about. You just see syntax, but you don't quite get it. But if you know what's going on underneath, I hope it makes sense.
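Here's a sketch of those call styles side by side, with MakesThree reconstructed from the description above:

```rust
#[derive(Debug)]
struct MakesThree {
    count: u32,
}

impl MakesThree {
    fn new() -> Self {
        MakesThree { count: 0 }
    }
    fn increase(&mut self) {
        if self.count < 3 {
            self.count += 1;
        }
    }
    fn current(&self) -> u32 {
        self.count
    }
}

fn main() {
    // Fully explicit: borrow mutably, because increase changes the
    // counter, and immutably, because current only reads it.
    let mut counter = MakesThree::new();
    for _ in 0..=4 {
        MakesThree::increase(&mut counter);
        println!("{:?}", MakesThree::current(&counter));
    }
    // Prints 1, 2, 3, 3, 3: it is not allowed to go past three.

    // What everybody actually writes: the compiler inserts the borrows.
    let mut counter = MakesThree::new();
    counter.increase();
    println!("{:?}", counter.current());
}
```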
Anyways, we're done there. So let's talk about self. Self has to do with methods and things like that, right? Here's a contrived type, another one. You'll notice that these two are functionally the same, but they look a little bit different. What you'll normally see in Rust code is this, but this is actually what is happening under the hood. Because you'll see that, with every other function, you must actually have a binding, a variable (what's the difference between a variable binding and a binding, anyway?); yeah, you'll have a binding itself, and then you'll have a type, right? But in methods, you just ignore that there are actually those two kinds of things. It sort of magically disappears, and then you get this. That's why I call it elided. Exactly the same. I mean, check: this is just to show that the two approaches produce exactly the same output. Increment it, you get one. Increment it again, you get two. Doesn't matter. So yeah, that's it. And this is the whole list of that kind of magic that we see. So what happens is that in your method you just write self, but it's actually translated to this; borrowed self gets translated to that; mutably borrowed self gets translated to this. That's it. Cool. So we're done there. So this is a short one: arithmetic shorthands. This is really to help you not type as much as you otherwise would. So there we go: foo equals one; foo equals foo plus one. You don't want to have to repeat the foo. I actually don't know how to read this. In my head, I just say "foo increment one", but it doesn't actually make sense. I'm going to have to find out, actually. But anyways, yeah, the point here: you're going to add one to one, which is two. That's basically what's happening. And similarly, another one is multiplication, where you can do a similar thing. Instead of repeating yourself, you just say foo times-equals four, and you get 16. Four times four is 16, right? Is it right? That's a good reaction. Okay, anyways, let's move on to the question mark operator. That's one of the big ones. Actually, I find it fantastic. Anyways, so if you are handling errors, you can have either that syntax or that syntax. I mean, which one looks better? So actually, somebody did mention that they wish that Rust had a sort of Rails mode for beginning people, so that whenever you read code like this, you can actually translate it, and then it shows you the raw kind of stuff. There's an option when you use rustc (I actually forget the exact option) that produces HIR, which is the high-level intermediate representation. I must still find out how to actually do it. But the output is actually quite ugly. But, you know, actually, maybe it's not too hard to make it beautiful, right? It's an idea for an improvement to Rust. Anyways, so basically, it's going to take this and produce this for you. That's what it is. So, because, I mean, a new person is going to look at this like: what the hell is that now? That's a new operator. And at the beginning, you want to see what's happening. Sometimes you want to suffer a little bit, so that you can appreciate the conveniences. So, suffering means typing all of this, because this really gets tedious, right? And, yeah, in this example, I mean, this thing is boilerplate, really, right? So what do you do here? You read.
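Roughly what that contrived slide example does, as a sketch:

```rust
use std::fs::File;
use std::io::Read;

/// Box<dyn Error> lets the two different error types below (I/O and
/// UTF-8) share one return type, which keeps the example in one function.
fn read_file() -> Result<String, Box<dyn std::error::Error>> {
    let mut bytes = Vec::new();
    // Each ? replaces a whole match: on Err, convert and return early.
    File::open("some.txt")?.read_to_end(&mut bytes)?;
    // A different error type from the I/O ones above: the bytes might
    // not be valid UTF-8.
    let text = String::from_utf8(bytes)?;
    Ok(text)
}

// Roughly what a single `expr?` desugars to:
//
// match expr {
//     Ok(value) => value,
//     Err(err) => return Err(err.into()),
// }
```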
So, it's really contrived, because there's a much better way of doing this, but it's just to illustrate. So this takes this text file, and then there are errors. I mean, there's a whole bunch of errors that can happen with I/O, right? So it reads this file into this variable, and then later on (because what we actually get is bytes) you want to convert that into UTF-8, and strings only work with UTF-8. So what you have there is a different error from what you get there. This one is an I/O error; this one is a UTF-8 error, in case your bytes do not comply, meaning they're not actually valid Unicode. So, by the way, what's happening here? There is something that I must still figure out. I wanted to do something that compiles. So basically, when I was doing my testing, I return a Result with a boxed error, so that any error can be handled. Just to be convenient, so I can do all of this in one function instead of separate functions. But anyways, writing all of this is basically equivalent to this. So you read your bytes, and then you convert to UTF-8. That's your error handling, and that's the convenience. Yeah, anyways, there's something to do this: it's read_to_string, available in the standard library. So don't do it this way; it's just for illustration. So anyways, moving on. What's going on here? Something's not right. I forgot to press F5, so something was misaligned. Okay. Anyways, this is what it's supposed to look like. Sorry, guys. Anyways, yeah, much better, right? Anyways, moving on. We have this concept of lifetime elision. I was watching Rust before 1.0, and the lifetimes that you'd see were quite hairy. It's so much nicer now, because things can be elided, and it's safe to do it. I mean, here's a simple example. It should have been a very, very ugly example, but anyways: what happens here is that you have a borrowed string that you're just printing. This is actually the same as this: you don't have to have that, what do you call it, that tick. What is this character? Yeah, the tick character. I mean, this syntax: who loves this? Is there anybody? But I guess the designers didn't really have a choice. I mean, what's the alternative? Because I can't think of an alternative. Did people think of better alternatives to this? But anyways, we're stuck with it. It's also weird because you have a random 'a sprinkled in there. What does a even mean? Maybe it should have said "lifetime" or something. Anyways, you don't have to have it in most of the cases, actually, even when you return a borrowed type. If Rust can safely know that you don't need one, then you don't need one. Anyways, there's another example where you can ignore the lifetime annotation, which is const. Maybe this was done last year, I can't remember. In Rust, you always needed to type this super-long thing. Look at that, right? I mean, you don't have to. A const is static anyway, so why type it out? It's a static lifetime, meaning that it lives for the entire life of your program. And that's the nature of consts and statics, so there's no need to annotate it like that.
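A sketch of both cases:

```rust
// Elision: these two signatures mean exactly the same thing.
fn print_it(s: &str) {
    println!("{}", s);
}

fn print_it_explicit<'a>(s: &'a str) {
    println!("{}", s);
}

// And for consts and statics, 'static no longer needs spelling out:
const GREETING: &str = "hello"; // what you write now
const GREETING_OLD: &'static str = "hello"; // the old, longer form
```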
And then, yeah, that's type inference and coercion. I put them in one section because I don't always 100% know when one is happening and when the other is happening. I'm not 100% on that one, so I just merged them into one. Anyways, what do we have here? We are creating a vector from a range. So what happens here is sort of similar to the earlier example. And then, what happens with collect? Collect basically says: put all of these iteration values into one container. And the container here is a vector. And you can be super explicit and say this is a vector of u8, which is bytes, and then say go from one to three, basically. So this will produce for you a vector that has the contents one to three. And you can reduce that; you don't need all of it. Rust is like: oh, okay, you're talking about u8. You don't have to specify it explicitly, which is cool. Most of the time this works, because it can infer what you're talking about. And if one of the items is of a specific type, then it's going to be inferred that the rest are of that type too, or coerced. So, I don't know. Or you can just fall back to the integer type. There was actually some discussion about this, where people didn't want integer fallback: you'd have to be explicit about everything and stuff like that. And then some cool people decided: you know what, you can fall back. There are a lot of times where you don't care about the exact size of your integers, and i32 is a safe default. So what that means is that that's i32, and that's i32, and that's i32. Oh, already? My goodness. Damn, okay. Oh, my goodness. What do I skip? Okay. Yes. Okay. There's some weirdness happening there. But I only have ten seconds. Anyways, let me move through quickly. I don't know what to skip or what to talk about. Okay, let me just move a little bit faster on this. Thirty seconds. Oh, my goodness. Okay. Yeah, there were a bunch of examples. There's derive, for example, which I talked about. Derive is basically sort of code generation, like macros, where you want to avoid the tedium of typing everything out. For example, I mean, look at this example. If you want to do a debug print of something, you could do this. I mean, look at all of that: you say you're going to print Point, and then one of the fields is x and another one is y, just so that you can get this output. Or you can just use derive Debug. That's one convenience. There's another derive example, which we're skipping. And then there is if let. Oh, my goodness. Pretty important. I mean, look at this, right? Look at that, and then compare it to this. Yeah, no. I mean, anyways, okay. Sorry for being bad with time. Thank you.
The Rust compiler provides a number of conveniences that make life easier for its users. It is good to know what these are, to avoid being mystified by what's going on under the hood... the less magical thinking we have of the world, the better. Examples of these conveniences: lifetime elisions, type inference, syntactic sugar, implicit dereferencing, type coercions, and hidden code (e.g. the prelude). With the help of examples, this talk is going to compare code with and without these conveniences.
10.5446/52168 (DOI)
So, I'm Nicholas, or Niko Matsakis. This is Ashley Williams. And we're going to do a talk about doing open source on purpose, intentionally. But before we do that: so you may or may not know, but in the next couple of weeks, Rust 2018 is coming out. We'll see release candidate two, more specifically, which will become the final Rust 2018 release, hopefully squeaking into 2018 before the next year. We'll see. But it's going to be pretty exciting. And in particular, 2018 kind of sums up the work. We're kind of retconning all the work we've done over the last three years into this 2018 edition. And it's a good time for us to stop and ask ourselves: well, what have we done over the last three years? How did we get here? What is our trajectory? How do we feel about it? And should we maybe make some alterations? And in this talk, we're going to be focusing a lot not so much on the technical parts of what we've done over the last three years, but on the engineering that goes into the team structure and the way the project manages itself, makes decisions, and runs itself. And so, if you ask how we got here: well, I've been at Mozilla now seven years working on Rust, and I can tell you that in the beginning we didn't have a lot of structure. We had Mozilla employees doing the majority of the coding, I would say, and the decisions were made on an ad hoc basis in group meetings. Graydon was the team lead, but he ran it with a light touch, and we kind of made decisions as a group. And eventually that was becoming a problem. And around 1.0, we announced the first Rust teams. And the reason was twofold. There were all these people who had done enormous amounts of work on the project, but they weren't Mozilla employees, and they didn't have an official say in any of the decisions, really. They had no formal recognition, and that didn't seem right to us. But also, if we were going to bring Rust along and scale it up, we had to stop funneling decisions through the same two or three or four people. We had to really have a lot of people moving, so we could do things independently of one another. And in that time, we've grown the teams a lot. So what started as a relatively small group... this is a chart prepared by David Tolnay, wherever he is, that shows the kind of growth over time. And you can see, not only did we grow a lot, we grew a lot lately, in the last chunk of six or ten months. And the reason is that we've been putting a lot of focus on it. This is not an accident. It's been a deliberate effort of ours to grow and expand the teams, precisely so that we can scale better. So hold on. Ashley, talk. I know. This was our good handoff. Yeah, that was awesome. So I have been giving a fair amount of conference talks, I guess, for a while, but lately I've been really hung up on this idea of: what do programming languages want? And the reason I like focusing on this is that it turns out there's a lot of effort put into all of these things, but rarely do we step back and ask ourselves: what are we building, and why? And so I've asked this question kind of in general, but I love to ask it about Rust. And so Rust has always kind of been about this idea of technical excellence. We say: all right, there are these trade-offs in computer science, but what if they didn't have to be a trade-off? What if we could get all of the things? Our classic "pick three" slogan. Like, we can do it all.
And that means that we're very strongly, like, really heavily computer-science-based. And you know what they say about computer science. Right, right. And so we always talk about the hardest problems in computer science, but I'm a strong proponent of the idea that the real hardest problem in computer science is people. They are by far the most complex distributed system that exists. And I will fight you on that topic, because I have lived this personally. And if you've ever done any work with people (which, potentially, as a computer scientist, maybe you've been trying to avoid): people are a really, incredibly difficult problem, and it's something that we're working on. But fundamentally, Rust has also always been a programming language that's focused on empowerment. Not only do we want to make a technically excellent product, but our goal is to take systems programming and widen the audience that can actually use it. And that's what a lot of our work on the productivity and ergonomics stuff has been, and that's been really important. And it turns out, to get that stuff right, you really need people: both people building it, and people using it, and people giving you feedback. And we're going to be talking about feedback-giving in a second. But luckily for us, we're Rust, and we have a lot of amazing people. Raise your hand here if you're on a Rust team. All right, give those people... come on, it's time for the... All right, so from that point on, Niko is now going to start talking about growing teams. Thank you for that very well-performed handoff, Ashley. They're only going to get smoother. So yeah, I'm going to talk about all these people on the Rust teams: how did they all come to be here? And, I showed you that graph that's been climbing lately: what are the things we've been doing to make that growth happen? And I think these lessons apply: they apply to Rust, but they apply to any sort of open source project, and probably to an open source project that you're trying to build, in the end. And so there's been this, I think, simplest model that a lot of us start with when we come to open source, which is this idea that there are all these people writing code, and great things kind of just appear out of nowhere. And sometimes that's true. So, like, burntsushi came along with this regex crate, and none of us were... we just kind of... we knew we needed to do regular expressions at some point in 2014, but it wasn't like a top-priority item, and there it was, and it was great. There's kind of this serendipity aspect, where you've got these really nice surprises, and they just happen. But surprises are not always good. Sometimes you get a PR, and you're kind of like, yeah, that's not the way I had in mind. That's not quite what I was thinking of. And sometimes, if you just let it happen any which way, you end up with this kind of cacophonous effect, where everybody's running with the project in whatever direction they had in mind, but it may not be forming a coherent whole. And by the way, this is a poster from my old guitar teacher, and that's me over there in the corner. And he's great; if you're in Boston, you should check out Sam Davis. Okay.
So the thing is, there's another side of this too, which is that serendipity is sometimes really excellent, but there are also a lot of people who don't want to participate that way: who don't want to just pick up a project with no docs and no idea what's going on, and just open a PR. And so you're losing a large portion of the people who might be involved in your project if you just leave it up to chance. So what we would rather do is approach it in this kind of deliberate, on-purpose, strategic way, and really widen the set of people who are participating. And actually, the first step is kind of non-obvious, although it sounds obvious, which is that you should ask people to help. And the reason that's not obvious is that a lot of times we feel like we're the only ones who can do it. I'm the only one who understands this project. I'm the only one who knows how this problem works, or I can do it the best. And that might be true: you might be the fastest, but you're probably not the only one who can do it. And when you ask, it's also important to think about how you ask. If you just open an issue that says, well, I need some help, you're kind of shouting into the void a lot of the time. But if you can give people some instructions, some steps, that can be much better. So, like, one popular thing for getting a lot of work done (there's a lot of repetitive work to do) is to make what we call a quest issue, which is a checklist and some instructions for how to do an item. And people can come in when they have a little time and just tick off one or two. They don't have to do the whole thing. This is a much easier way to get involved in the project than having to stake out something all on your own. And in general, I guess there's just this need to be building a space for people to participate: thinking about what it feels like to get involved, and making sure that there's a place for them to move into. So sometimes that might mean doing a little bit of the work, enough that people can see the shape of what you want to do, but not finishing it. So this first PR that I opened some time ago, to add MIR to the compiler (which is the intermediate representation we're using): I left various holes, right? So it didn't really work. It only handled print; you could add two numbers; that's it. But it was enough that people could get in and see what I had in mind. And the final product looked very different from that skeleton, but that's okay, right? That wasn't the point. And in general, I would say: just move your planning and your operations out of your head and into public spheres, where people can give you feedback and be involved. That will also help them to follow along with what you have in mind, and it gives you a chance to articulate your vision for how this should work and make that known. So Rust does this at a lot of levels. This is part of the role of the roadmap at the highest level, setting the direction of the project on a yearly basis (which, by the way, we're going to be starting soon, I think, for next year's roadmap process), and it goes on down at different scales. And the final step is, you have to tell people how great they're doing. But not only tell them; it's not enough, I think, really, to just say "you're the best". Ideally, as they get more experienced, you should also be starting to share some of the power.
They're probably going to have ideas that may not be the ones you had, and that's okay. A lot of times those ideas are actually really good, and they can take the project in directions that weren't what you originally had in mind, but were actually better. One simple example that I remember, just because it made an impression on me: eddyb is now a pretty well-known Rust contributor, but at that time he was relatively new. And I remember we were planning to add the Deref traits to the compiler, and I thought, well, I guess I'm going to roll up my sleeves and do that in a few weeks. Of course, like two days later eddyb had a PR open, because he's really fast. And when I was looking through it, I realized: this isn't how I was going to do it. It's actually much better than the way I was going to do it; the patch was much cleaner and smaller. So we ended up taking it, and it was great. Sometimes things don't go the way you expect, but that's all right. Oh, no. So now Ashley's going to talk. So Niko's talked a little bit about the practical things you can do to help grow your project, get more people involved, and make sure that they want to stay. What I'd like to do, in classic Ashley fashion, is get a little bit more philosophical and talk about some of the underlying values and assumptions behind all of these types of actions. The two key concepts I'm going to talk about right now are the ideas of pluralism and positive-sum thinking. I just Googled the definition of pluralism, as one does: pluralism is a condition or system in which two or more states, groups, principles, et cetera, can coexist. So with that idea of having all of these different ideas together, all of these different types of people, a diverse set of ideas coming together, we have to have some sort of game plan, right? And so here's the classic meme: it's time for some game theory. Yeah. In general, when people think about these things, they often think in terms of a zero-sum game. In a zero-sum game, a participant's gain or loss is exactly balanced by the losses or gains of the other participants in the game. And fundamentally, we want to reject this idea. You can think of it as: your gain is my loss. If you need an example, you can open the orange website, or maybe Reddit, and you'll see right away that a lot of the infuriating statements and energy there exist because people fundamentally believe that in order for their idea to win, someone else's idea has to fail. Theirs needs to be correct, and another one needs to be failing. And one of the principal ideas of Rust's organizational structure is that that's simply not the case. So we believe in something called a positive-sum game, a situation in which the total of gains and losses is greater than zero. That means that my gain doesn't necessarily have to mean your loss, and that I can have gains. (We did not schedule those.) Anyway, the idea is that in order for me to get something that I want, it does not mean that someone else somewhere needs to lose something. And while that seems like a somewhat simple concept, it's incredibly difficult to actually think and believe this way, and in particular to run a project with a large set of people with a lot of ideas and still genuinely believe in this idea.
But in the end, it's a little kumbaya, right? We believe that the whole is greater than the sum of its parts, which is to say: we have all of these people working on Rust, and Rust is better than just adding them all up, because we are working together, and that working together creates value on its own. Now, this is kind of a gamble. A lot of systems don't work this way, and I have to imagine that people in this room have been part of organizations that did not believe this and probably experienced something entirely different, something maybe much more adversarial. But we can think of Rust, in a certain way, as Captain Planet, right? We have a lot of different audiences in Rust. We have people who are just working on Rust, just that, and maybe C++ developers coming in to share their perspectives. We have some JavaScript people in there; JavaScript is awesome, don't knock JavaScript. Also academics, and also brand-new developers. And yes, some people learn to program by writing Rust, which I think is amazing. So we have all of these different groups of people coming together, and maybe we can form Captain Rust, and that's going to be awesome. However, as I said, this is kind of a work in progress, and sometimes it ends up like this. Yeah, I've been there, and probably you have too. So what we're trying to do is this: we want truly open consensus-seeking, and we're asking, can we scale it? At the beginning it was just a couple of people, and as you grow and grow and grow, you have to keep asking yourselves, you know, are we web scale? Also, the trick with this meme, I guess, is that when he turns that blender on, our ideals get all crunched up. I didn't think it through. But anyway, the truth of the matter is, I spend a lot of time having doubts that these fundamental values and assumptions of ours are going to work. A lot of our leadership also has these feelings, but I do genuinely want to believe. And what we're seeing now are a couple of instances and patterns that are growing pains, the scaling pains of this truly open consensus. These are some ideas from Aaron Turon's blog, what I call his feelings blog, where he blogs about management. If you don't know about it, you should absolutely check it out, because it's brilliant. A couple of the issues we've run into: the first is this idea of momentum, which is that it's hard to shift whatever sentiment is first established. I was once told, when I started writing RFCs, that RFCs don't usually survive their first comment, which is to say: whatever the first comment on that RFC is, that's going to set the tone for the whole conversation. So hope and pray, or maybe ping someone and be like, please comment first, please, I want this to work out. But that can be really tricky. We've seen this happen on a whole bunch of RFCs. And it's worth noting that if you find yourself in a conversation like this, scroll up and see: was this entire, maybe sidetracked, part of the conversation started by a first, potentially aggressive, comment? It's definitely possible. The next one is just, fundamentally, that the stakes are high. People are feeling incredibly urgent. Not only are they commenting a lot, but they're commenting fast. And I think it's very interesting in Rust, because we all desperately want Rust to succeed, that we feel like we have to race to that success.
And so, because everything feels like it matters so much, we're going incredibly quickly. And it turns out that giving everyone the benefit of the doubt and caring about people is really hard if you're all racing at light speed to try to get something done. That speed really accentuates a lot of the problems with the organizational structures that we have. And then finally: how many people in here sometimes feel really tired? Yeah, me too. Part of it is a result of this attempt to bring so many people into the project. It's genuinely hard to participate, and it often doesn't feel like progress is being made, because there is so much happening and so many people. These three things are patterns that we are struggling with right now, and we're really interested in finding ways to fix them. A lot of this ends up happening because, fundamentally, when people respond to change, they don't respond logically, despite the fact that we all probably think of ourselves as really logical; we really like computers, we do logic. Ultimately we respond emotionally, kind of like this: oh, let's remove the mod keyword, and he's pissed about it. He also hates lettuce. I think he's just hungry, I don't know. And that's a sentiment that we honestly see a lot. So this is a vague summary of a recent RFC process that went down, where it was basically: luckily enough of us yelled to stop the terrifying original proposal from happening, but the moment we stop speaking up, those people will start pushing in that direction again. Fundamentally, this is the idea of wielding power versus changing minds. Often, because of those three patterns I was bringing up, people will reach for the hammer: I want to take my power and make this happen; this feedback sucks. The idea of wanting to use some power instead of doing the work of changing minds. I used to be a middle school science teacher, and one of my teacher buddies told me: Ashley, you need to remember, it's tai chi, not karate. And it works with teaching kids; it also happens to work with organizational change. If we need a visual here: you might feel as though the core team is that big cat swatting down that little cat. We're the people in power, and if you don't keep fighting us, we're just going to do what we want to do. And interestingly enough, it works both ways, because sometimes the core team feels like the little cat, and we feel like the community is the big cat saying: we have a lot of time and energy, and we're not going to let this happen. But that's really not how we should be looking at it. We should be looking at it like this: the idea is to try to see things from other people's perspectives. And again, a lot of this might seem like banal advice, but it's shockingly difficult and rarely seen in practice. One of the fun things about Rust, as Niko said, is that we've seen so many RFCs, maybe filed by core team members, where people from the community show up with a really amazing idea that we didn't even think of. And it's just like: that's freaking awesome. So it's not always an adversarial situation.
But yeah, you could change your perspective here, as Niko said: maybe he's just hungry, didn't get a good night's sleep. It's possible. All right, so I put this really cute dog with a pumpkin here because it's about to get real. I'm going to talk about feelings for a second, because I think it's important. So recently... how many people here use crates.io? All right. So recently, crates.io had an interesting operational incident. This is a screen cap of me filling out our status page report, and the message says: we are currently seeing intermittent performance issues; we believe we have identified the cause as malicious end-user behavior and have taken actions to address it. So we've just been talking about all this work of bringing people in, and sometimes some not-great things happen. This is a screenshot from a different orange website than the classic one, but still orange, and it's talking about an issue that I know is very important to a lot of people in this room. The first thing I want to say is: I am not here to diminish your points of view and your opinions. Your care is literally why we include you in everything. However, I saw a lot of comments on this, some I wish I hadn't seen, but one of the comments really stood out to me. It's a comment that has 122 upvotes, so it must have rung a bell with people. And it says: the reason that crates.io's squatting problems are becoming ridiculous (the implication being a lack of action) is because the core Rust developers believe in the inherent goodness of people. Bless. So the trick about this is, I mean, it's hard to know what people really mean, but I got the sense that maybe this was sarcastic. And it struck me because, fundamentally, like everything we've been talking about in this talk: we really do. We really do genuinely believe in the goodness and value of the people in our community. And to see someone sarcastically throw that at us, almost like an attack, was like: wow. No, this is a fundamental value of Rust. I'm not sure how you missed it, but this is the whole thing. In fact, to a certain extent, in order to accomplish what we want to accomplish, particularly our goal of being an empowering technology, we have to believe that. If we didn't, what would be the point? We couldn't accomplish any of this. There would be absolutely no point. And so when I read comments, this is the face I make. It's true. I'm kind of permanently sad-grumpy on the computer reading comments. But a part of being a leader in an open source project that you don't realize is that, while we might be kind of grouchy or maybe cynical sometimes, our primary job is this: we read all those comments, and instead of really getting upset, what we actually do is ask, what have we done wrong? What is it that we have done that has caused this? We have not created the right pathways for communication. We have not properly communicated our values. We have not opened up processes the way we should have. So when we see behavior like this, it really affects us. And it's not to be whiny, but sometimes the amount of emotional labor that we are doing on a regular basis can cause a whole different type of fatigue than the one I was talking about, the oh-my-gosh-there's-so-much kind.
Trying to see the good in everything and stay positive with people can be really tough, for real. So this was another comment, and it said this: personally, I'm not a fan of any statement containing the words "it cannot be done" or its equivalent. Yes, it can. You just don't want to, which can be okay, perhaps. I empathize with this a lot. I know where it's coming from. But at a certain point, one of the things you have to realize is that everyone is just human. And if people think that everyone on the core team gets what they want, oh my goodness, it's just not true. Fundamentally, sometimes I really can't. A lot of the things that I want to get done, I just really can't, and neither can the core team. There's a lot on our wish list that we would absolutely love to get done, and we're bound by the fact that we only have so much time. And while this could start to sound whiny, I don't mean it that way, because we're also here to say that sometimes we can't, and sometimes that's our fault. So this is another comment. It says: however, as mentioned, even by the Reddit mods, the topic of squatting comes up once a week. It is a problem that needs to be discussed and taken seriously, and several members of the Rust community, as seen in this thread and in numerous Reddit threads, feel that it has not been taken seriously and has repeatedly been dismissed. And it's true. We have messed stuff up. We are not perfect communicators, and fundamentally it's on us to change that. So this talk is not only us talking about the tactics we use to try to grow the project; it's also admitting that we are still working on it and haven't gotten there yet. So despite the fact that we make jokes about online comments or RFCs that get 300 comments, we don't believe it's all the community's fault. There's a lot of work for both the community and the leadership to make this happen. I have a mic here. So, yeah, this is a theme that's come up for us numerous times. We have a saying about it, which we call: the core team must, but the core team can't. It's a particular pathology that can arise. The quote is not actually from Florian (skade), but we sort of arrived at it from some emails that he sent to us. The problem is that you can see something is very important, so important that you feel like you should really deal with it yourselves, but you don't actually have time to deal with it, so it just doesn't get dealt with. And then you don't provide a way for someone who is motivated to have a road in, to pick it up and run with it. And so the secret I wanted to tell you, from the first part where I showed you all these strategies for building your open source project, is that if you succeed, the nature of the work is going to change. And sometimes that might not be what you had in mind. You started out writing code, and now there are 20 people trying to contribute, and you have to work with them and build consensus and manage this project, and it's a different kind of work. It's a kind of work that often isn't as valued or as recognized, I think, in the open source community. And so there aren't as many people who have those skills, or who think they could use those skills as part of an open source project, right?
And so I'm here to say: you can. If you'd like to organize, you should talk to us. It's really important. And there's this other part that's a little more insidious, where sometimes you were the one making the decisions, everything was going just the way you had in mind, and now somebody else is coming with their suggestions, and maybe they're even better than yours, and maybe you even recognize that, but that doesn't mean you like it, right? That definitely happens from time to time. So yeah, that's kind of the... So the theme of this talk is this idea of being deliberate, and because I am insufferable and pedantic, I Googled the definition, and there are a couple of really interesting ones in here. One of the primary senses of being deliberate is doing things with full consciousness of their nature and effects; being intentional. And that's certainly something that we in the Rust team are trying to do. But the one that I think is the most important, and probably the most controversial, is this last one here, which says: unhurried in action, movement, or manner, as if trying to avoid error; see synonyms at slow. Now, this is only my personal opinion, but I have talked to many people in leadership and other parts of the organization, and we heard ourselves talking about this urgency and fatigue in Rust. And my claim for Rust 2019 is: let's go slow. I think a lot of the things we have succeeded with happened because we moved fast, but we moved fast at a cost, and I'm not sure I'm willing to keep incurring that cost; I don't think the trade-off is as worth it these days. I'm really curious to see how we can build capacity over the next year, so that maybe at some point we can start moving super fast again. But I think there's an opportunity here to slow down and think about what we're doing, and maybe we can do just a little bit less of this. So my call is that I'd like to see us all be deliberate together, not only leadership, but also the community. (Oops, I didn't switch my slide.) Because in the end, that is what is going to bring Rust to the success that we all really want to see it have. So, to bring it back to the very beginning: we're at the edition. We've now covered where we are and our trajectory. So what do we want to see over the next edition, over the next three years, say? I think we've proven by now that you don't need a GC for memory safety, and I think what we're trying to prove now is that you don't need a dictator to have a good language; you can do it as a community-driven process. That was your cue. And we have fearless concurrency today. We would like to get more of this concurrency among our teams, so that we can go fast without incurring some of the costs. And maybe most importantly, we have thread safety, right? But let's try to get some thread safety on the RFCs and internals, maybe. All right. So, to bring it to a close: I'm soon going to be a track host for QCon, on 21st-century languages, and it turns out that the vast majority of modern languages have picked a governance model that is almost exactly the opposite of what we are doing in Rust. And something that's really amazing is that we have this great opportunity to prove that a modern language not only can but absolutely should run as a radically open, pluralist project. And the best part about it is that we are all going to have to do it together.
And the opportunity to prove that that's an option is right in front of us. That's the work that we get to do. And I think it's going to be incredibly exciting when we actually succeed, because I believe we can. So finally, I'd like to say: please join us. You can join teams, if you did not know that. Reach out. We want you all to be involved. Thanks. Thank you.
Core team members update everyone on This Year in Rust!
10.5446/52169 (DOI)
As she said, I'm Simon Heath. I am talking about evolving API design in Rust. I've been using Rust pretty much since 1.0, and I'm interested in programming languages and compilers, in building infrastructure, and in making video games. So I took these things, put them together, and made a game engine, because that's more fun than making a game. It's called GGEZ. This is the first major Rust project I've worked on, and it's also the first major open source project I've worked on. The goal is to make it easy to make 2D games, because this was something I wanted to do, and it's a good way to learn Rust. Whenever someone says, I need an idea for a project to learn Rust with, I can say: you should make a game. It is based on a Lua game framework called Love2D, which has a similar goal of making 2D games easily. Which is to say that I went through the Love2D API docs, function by function, and wrote everything in Rust, because I didn't really have a great plan for how to make a good 2D game engine. I just wanted something that was simple, would work, and was easy to use. I knew Love was simple and worked, and I'd used it before, so there we go. GGEZ is actually used in a few games made by real people who are not me, which is awesome. Nothing so far is super huge or complicated besides Zemeroth, which is this one, which has been worked on since before I started working on Rust. But hopefully someday I'll actually get to write games in GGEZ as well; that's my goal for 2019. To do all this stuff, GGEZ brings together a lot of other crates from the Rust ecosystem, and it has to take all of these libraries and make them play nice with each other (and convince the ones that don't have cool logos to make cool logos, so I can put them on the next slide). It has to take these crates and have them interoperate. It has to take whatever API they expose, wrap it up in a consistent way, and make it easy to use for Rust newbies. And it has to actually be able to use all of these crates successfully. So I have gotten to be very good friends with some of the maintainers of these crates, because I would keep submitting bug reports, or I would keep saying: I need to be able to do X; how do I do X? And usually they tell me, and life is good. But I wanted to do this talk because I also hang out on Reddit and IRC a lot, probably too much, and I keep seeing things like this. Everyone who first learns Rust and writes something big has to ask for advice on it. And it's weird, because I don't see this a lot in Python or C# or whatever. Maybe the world would be a better place if people did do this in those languages. Either way, people who learn Rust seem to have trouble figuring out how to write Rust APIs, or at least they have anxiety about it. They keep asking: how do I write idiomatic Rust? And so that's what I want to talk about. How do we design a good API in Rust? In my case, I didn't have to design an API; I just copied an API and made it Rusty. So let's take a look at the API I copied a little bit. I'm going to start with some GGEZ examples, and then I'm going to look at some of the other crates that GGEZ uses, how those look from an end-user perspective, and what's good and bad about them. So here is a very simple Love2D game. It's all in Lua. I don't know how many people out there know Lua. But we have some functions.
We have load, update, and draw, which are the fundamental parts of your game. We have a global player, which is just a table, essentially a dict (that's my Python showing through). Update detects whether you are pressing buttons and changes the world state if you are, and draw draws stuff. Great. These are basically callbacks that are loaded by the Lua interpreter, and Love basically ships a version of the interpreter built with a bunch of libraries that looks for these callbacks, loads them, and runs your game. So this is pretty different from how Rust works. I mean, there are no curly braces at all. But we have these magic callbacks that the interpreter looks for. It's dynamically typed; there's mutable state everywhere. I didn't even know if it was going to be possible to make this in Rust. I was like, well, maybe. I mean, look at just the Love2D draw function. We have four different overloads for it. I like your expressions responding to this. We have some drawable object, or you can replace the drawable object with a texture and a quad saying which part of it to draw. You can have a transform, which is a structure that bundles up all of the drawing parameters, or you can list all the possible drawing parameters individually. And it also turns out that you can omit the ones at the end, and they'll just default to 0 or 1 or whatever is appropriate. So you can leave all of those off and just have x and y and r, and it works fine. So I was looking at this and saying, well, I just want to make something work. Worst case, I'll make a separate function for each of these variants. I started with that, then squished it together and got something like this. We have a struct that holds all the draw parameters you can have, and we have a function that takes a drawable object, the draw params, and a context (which just holds on to the graphics context state and appears everywhere), and it draws the object based on whatever parameters you give it. And then, OK, we also have a simplified function that just takes a destination point and a rotation, and you can use that if that's all you need; if you need the full-power function, you can use that instead. I was like, OK, fine. Eventually, I discovered that the Default trait and the struct update syntax exist, and you can do something like this, which is actually halfway decent. It's not great, it's not terrible, but it works. It's not too pretty; I never really liked it. But something I realized as time went on is that GGEZ is an opinionated framework, so lots of people have opinions about it whenever they try to use it. It's actually fairly low-level: it doesn't provide animations, it doesn't provide a physics engine. And so everyone says, oh, why don't you do it this way? Or why don't you add this, or not add that? But nobody in the last two and a half years has actually complained about this horrible hack. It's not a problem. It actually has quite a few advantages. It's simple; even the most basic Rust programmer can understand it. It's completely obvious what's going on, where the data is coming from, where it's going, and where it's used. And with this nice syntax, it's even not too terrible. So it works. So the harder case was dealing with Love2D's callback structure.
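To make that draw-parameter pattern concrete, here is a minimal, self-contained sketch of what was just described. The names (Context, Image, draw_ex, the DrawParam fields) are illustrative stand-ins, not the exact GGEZ signatures, which have shifted between versions:

```rust
// Sketch of the "parameter struct + Default + struct update syntax"
// pattern. Context and Image are stand-ins for the real types.
struct Context;
struct Image;

#[derive(Clone, Copy, Debug)]
struct DrawParam {
    dest: (f32, f32), // where to draw
    rotation: f32,    // in radians
    scale: (f32, f32),
}

impl Default for DrawParam {
    fn default() -> Self {
        DrawParam { dest: (0.0, 0.0), rotation: 0.0, scale: (1.0, 1.0) }
    }
}

// The "full power" function: drawable + params + context.
fn draw_ex(_ctx: &mut Context, _image: &Image, param: DrawParam) {
    println!("drawing at {:?}, rotated {} rad", param.dest, param.rotation);
}

// The simplified variant: just a destination and a rotation.
fn draw(ctx: &mut Context, image: &Image, dest: (f32, f32), rotation: f32) {
    draw_ex(ctx, image, DrawParam { dest, rotation, ..Default::default() });
}

fn main() {
    let mut ctx = Context;
    let image = Image;
    // Struct update syntax: spell out only the fields you care about.
    draw_ex(&mut ctx, &image, DrawParam {
        dest: (100.0, 200.0),
        ..Default::default()
    });
    draw(&mut ctx, &image, (50.0, 50.0), 0.5);
}
```

The nice property is that call sites only name the fields they need, and adding a new field later does not break existing callers.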
And I ended up starting with something like this, where we had a trait called GameState. (I should have cut out all of the inconvenient code.) It provides load, update, and draw methods that are just like Love2D's load, update, and draw methods. And then down here you have this Game struct that is generic over the type you implemented the GameState trait for. It creates your game state by calling the load method, and then it has an event loop inside it that calls update and draw and takes keyboard events and all that stuff. This was the closest I could get to something that looked like Love2D, where it just had these magic callbacks that did everything for you. And it sucked. Nobody liked it; I didn't like it. I got tons of questions like: oh, how does the GameState actually get created? Where do I put the new method? Who owns it? (It's owned by the Game type.) How does it know what type to load? (Well, you have to annotate it with that generic parameter.) And where does the Context get created? (It gets created in the Game.) It was just complicated and nasty. And eventually I wanted to be able to take apart the event loop and let users write their own, with their own update functionality, if they really wanted to, because Love2D does allow you to do that. So eventually I ended up with this, which is kind of similar. We have an EventHandler trait. It defines update and draw methods. But the game state is just a struct you create, and there's nothing special about it. You create a Context, which is the handle to all the GGEZ library functions: it holds the sound state and the window state, it talks to the operating system, et cetera. And you have this event::run function, which is literally what you would just write yourself: a while loop that polls the operating system for events, calls the event handler's update, and then calls its draw. And it just does that. By trying to do less magic, everything becomes way better. And I thought, this doesn't really look Rusty; this is kind of dumb as a pile of bricks. But how do we design a good API in Rust? A good API is still a good API in Rust. That was what I figured out. Love2D started off with a pretty good API; I turned it into Rust, and it's still a good API. Rust doesn't add or remove anything too magical from it. And when in doubt, keep it simple. I like the term complexity budget that the last talk used, because putting my complexity budget into trying to make it look exactly like a Lua API wasn't worth the extra complexity. So, another problem I deal with, often when people are trying to get into Rust game dev, is that some popular crates are very hairy. People say, what are my options for drawing graphics? And someone mentions gfx-rs, which is awesome, and which GGEZ uses. And they go to the docs, and they see this. Now, I don't know about you, but usually I call it quits after only seven or eight associated types. Here we have 12. Or they say, OK, well, how do I do matrix math? Well, look at nalgebra. And they see this. This is the matrix type. What is even going on here? It's almost impossible to read. When I see stuff like this, I call it trait salad, because it's just a pile of different stuff all mixed up, and you can't make heads or tails of it.
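Backing up for a moment, here is a condensed sketch of that final event-handler shape, with stand-in types. The real GGEZ Context, GameResult, and event loop are more involved, but the structure is the same: a plain trait, a plain struct, and a run function that is visibly just a loop.

```rust
struct Context { frames: u32 } // stand-in for the real graphics/audio context
type GameResult = Result<(), String>;

trait EventHandler {
    fn update(&mut self, ctx: &mut Context) -> GameResult;
    fn draw(&mut self, ctx: &mut Context) -> GameResult;
}

struct MyGame { player_x: f32 } // just a struct; nothing magic about it

impl EventHandler for MyGame {
    fn update(&mut self, _ctx: &mut Context) -> GameResult {
        self.player_x += 1.0; // advance the world state
        Ok(())
    }
    fn draw(&mut self, ctx: &mut Context) -> GameResult {
        println!("frame {}: player at x = {}", ctx.frames, self.player_x);
        Ok(())
    }
}

// "Literally what you would just write": a loop that would poll OS
// events, then call update and draw. No hidden construction anywhere.
fn run(ctx: &mut Context, state: &mut impl EventHandler) -> GameResult {
    while ctx.frames < 3 { // a real loop runs until the window closes
        state.update(ctx)?;
        state.draw(ctx)?;
        ctx.frames += 1;
    }
    Ok(())
}

fn main() {
    let mut ctx = Context { frames: 0 };
    let mut game = MyGame { player_x: 0.0 };
    run(&mut ctx, &mut game).unwrap();
}
```

Because the user creates both the state and the context themselves, all the "who owns what" questions answer themselves at the call site.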
But if you're trying to learn Rust and this is what you see, then obviously the creator is this guy, who either is a mentat who is one with the computer and knows everything there is to know about everything, or is just a sadist who likes torturing the users of his crates. I mean, we saw his math framework; let's look at his physics engine. This actually looks kind of reasonable. It's just a bunch of methods. This is part of the collider type, I think, and it's pretty obvious what's supposed to connect. So it's not that he's a sadist, and it's not that he's operating on some plane beyond human comprehension. So what's actually going on here? We have a bunch of traits, and they all have really complicated bounds. We have one here called AbstractMagma. I usually think of this when I see that, or maybe this. However, it turns out that what I should have been thinking of is this: it's a math term. What nalgebra is doing is teaching the Rust compiler how to do fundamental math by encoding it in the type system. That's not something that would ever have occurred to me to do, but it also means that it completely rules out a lot of math errors at compile time. If you have a transform vector and a scale vector, you can't add them together; the type checker catches it. And gfx-rs is similar. It's trying to encode the state of a graphics card in the type system, which is really complicated and low-level, but it ends up catching a lot of bugs. So these aren't bad APIs. They're just very sophisticated, specific, low-level, and geared toward certain use cases. I've talked to people who have a formal math background and use nalgebra, and they love it. To them it isn't salad; all of this pops out at them, and they're like: oh, I know how everything fits together. So the next lesson is: know your audience. Who do you want to be making these crates for? What do they want to be doing, and how do you make what they want to do easy? If you're making something for yourself and you have a background in math, then you end up with nalgebra; if you don't, then you end up with something else. Also, as a user, know your tools. It's a lot easier to understand why an API is the way it is, and where to look when you need some functionality, when you know who the writer is making it for. Maybe it's not for you; maybe it's something you can learn. Either way. So, we've discovered that API design in Rust is hard. It's hard in any language; Rust seems to make it trickier, though. Why do people get such anxiety about it, and what can we do to make it better? Well, in Rust, API misfeatures are actually really nasty. That's true in any language, like Java or Python or whatever, but Rust is good at making things subtly terrible in ways that aren't obvious to people who don't know Rust very well. For instance, GGEZ has a submodule for loading resources from file paths without keeping the files in some platform-specific location that's different on Windows, on macOS, on whatever. So it uses a crate to just ask the operating system what paths it should use. The crate is called app_dirs, and it works basically like this: you create an AppInfo struct, which has the name of the app and the author, and the operating system has some specific location, based on this information, that it uses to store images or fonts or whatever.
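For reference, driving app_dirs looks roughly like this, going by its 1.x API (details may differ between versions; the app name and author here are made up). Note the &'static str fields, which set up the problem described next:

```rust
// Roughly how the app_dirs crate is used; assumes the 1.x API.
use app_dirs::{app_root, AppDataType, AppInfo};

const APP_INFO: AppInfo = AppInfo {
    name: "my-game", // &'static str: fine for literals...
    author: "me",
};

fn main() {
    // "Get me the user config directory for whatever OS I'm on."
    let path = app_root(AppDataType::UserConfig, &APP_INFO)
        .expect("could not determine config directory");
    println!("{}", path.display());

    // ...but if the name came out of a config file as a String at
    // runtime, there is no safe way to produce the &'static str the
    // struct demands, short of deliberately leaking the allocation.
}
```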
And you just say: OK, get me the user config directory for whatever operating system I'm running on. And it gives you a path. And that's it. So the API is this. There's more than this, but this is the part that GGEZ uses the most. We have a struct, and we have a method, and the struct has some static strings, and everything is good. At one point I wanted to write a program that loaded these paths from a config file or something, and so I had something like this: we have author and app name, which are owned Strings, and we feed them into AppInfo, which has static string slices. These are owned Strings, but those are static string slices; you can't get there from here. How do you do it? Well, you go on IRC and you ask: how do you turn an owned String into a static string slice? And someone will say: well, you can probably do it with unsafe, but we don't like doing that, because the point of unsafe is that you never need to use it. It's there in case you need it, like the shotgun on the wall in your dad's room: it never leaves the wall. It's there in case you need it; you just never need it. OK, so we'll fix that somehow. Let's just get on with the file system code. I want to be able to write something like this, where I have several virtual file systems, of which one type will be able to load things from disk, from the normal file system, and one will load things from a zip file and pretend it's a file system overlaid on top of it. This is handy for games, because if you want a game that's moddable, you can have all the normal game resources in one zip file, and then you can have a mod that just replaces a few of them, either in a zip file or in a directory. It's nice. And I wanted the file system to just be a trait object that exposed a few methods. We'd have a Vec of them, and we'd just go through each in turn and use the first one that has the file. Great. So I found a crate that looks like it does this, called vfs, and it has a trait like this. This crate uses the path as the entry point, kind of: you get a path, and then you open a file from it; it's very object-oriented. But it works OK. Then we have associated file and metadata types, and you create a path through this method on the trait, which takes something that can be turned into a string and gives you the path type for that trait. Well, hang on. I wanted trait objects. This method has a generic parameter with a trait bound, and you can't turn that into a trait object, because a trait object needs a vtable, and the compiler will make one version of the path method for each type you specialize it with. So you can't make a trait object from that; the compiler would have to be able to look into the future, see what types of T you would use it with, and compile those as well. So you end up in these situations where perfectly reasonable design decisions just get you into weird places, and it's impossible to do anything about it. And these are the easy cases. If you try to use gfx-rs or Tokio and you end up in one of these weird corner cases, then even rustc can't figure out what's going on and tell you what you're trying to do incorrectly.
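To make the vtable problem concrete, here is a minimal sketch (the trait and type names are invented for illustration; the real vfs trait is larger). A trait method with its own generic parameter rules out dyn dispatch, while an equivalent method taking &str keeps the overlay-of-filesystems design workable:

```rust
// Not object-safe: `path` has a generic type parameter, so there is
// no single vtable entry the compiler could point `dyn` calls at.
trait VfsGeneric {
    fn path<T: Into<String>>(&self, p: T) -> String;
}
// fn open(fs: &dyn VfsGeneric) {} // error[E0038]: the trait
//                                 // cannot be made into an object

// Object-safe variant: take &str and let callers convert.
trait Vfs {
    fn path(&self, p: &str) -> String;
}

struct DiskFs;
impl Vfs for DiskFs {
    fn path(&self, p: &str) -> String { format!("resources/{}", p) }
}

struct ZipFs;
impl Vfs for ZipFs {
    fn path(&self, p: &str) -> String { format!("resources.zip:{}", p) }
}

fn main() {
    // The overlay design from the talk now works: try each file
    // system in turn through a uniform trait-object interface.
    let overlay: Vec<Box<dyn Vfs>> = vec![Box::new(DiskFs), Box::new(ZipFs)];
    for fs in &overlay {
        println!("{}", fs.path("player.png"));
    }
}
```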
And so you just end up with these weird situations that aren't obvious, or at least aren't obvious if you're not looking for them, where you want to do something that the crate author didn't think of you trying to do, and Rust isn't allowing you to do it. They made some design decision that had unexpected consequences. Unfortunately, I don't really have a great answer to this one, besides: iterate, because people will always come up with interesting use cases that you never thought of. I've spent the last two weeks with someone trying to get GGEZ to work on iOS, because they really wanted it to, and I had never really considered doing that before. Your users will always come up with something you didn't expect, and the nice thing is that it's not a terrible thing to redesign an API in Rust, at least for small things, because it's Rust: if it compiles, it will probably work, and it's very hard to break things by accident. So have people actually use your crates. Eat your own dog food, but also make sure you share your dog food around and have other people taste it and see if it is to their palate. Rust is great here because you have Cargo, everyone tends to use semantic versioning, and it has good encapsulation. So if you make an update to your crate and people don't like it, that's fine: they just use the old version, and it's no problem. So: everything you know about API design still works in Rust. If you can make a function that takes borrowed types and returns a Result when necessary, then you're writing idiomatic Rust code. It doesn't matter how non-fancy it is. You don't need the fanciness; the fanciness is there when you need it, but most of the time you don't. Always make sure that you have some idea of who you're writing the crate for, so that when the next interesting feature pops into your head and you say, oh, I wonder if it could do X, you stop for a second and ask: does X really serve my purpose or not? And more importantly, if I write X, how do I explain it to someone who's trying to use it? And iterate, and keep working, and keep writing code. So thank you. Thanks.
As Rust is a young language containing many innovative features, questions about how to structure Rust libraries and APIs are common. Heavy use of metaprogramming and trait constraints can make libraries hard to understand and use, but also brings great power to reason about programs at compile time. How do you design a library that exploits the power of Rust without making new users say "this is way too complicated to bother with"? We will discuss these issues using an existing Rust crate as a case study in both designing an API and how it is influenced by the API decisions of the crates it uses as dependencies.
10.5446/52008 (DOI)
And welcome back to the next talk of SolarPunk 2077. We just heard the lovely Radio Cosmica from Mexico talking about how to heal through radio waves, and that's, I guess, a good intro into what's coming next. I would like to welcome my friend Ma Yongfeng today. He's going to be talking about Guerrilla Living Syndrome, or the Anarchist Hospital, as he also sometimes calls it, sharing a bit about imagination and other forms of living together on this planet. Ma Yongfeng, can you please introduce yourself? Okay, hello. My name is Ma Yongfeng. I'm an artist and curator based in Berlin, but I lived in China for most of my life, mostly in Beijing, and also in Shenzhen and some other cities. I do a lot of different projects, non-traditional or non-conventional art projects. Today I'd like to... actually, I would have liked to do this as a face-to-face interview, but I have to record myself, because I have been outside of Berlin for several days. So today I'd like to introduce a project that was initiated by Forget Art in 2011 and 2012, called Guerrilla Living Syndrome. In China, especially in the fast-developing big cities like Shanghai, Beijing, Shenzhen, and Guangzhou, many young people live and work there, and the government urges them to buy property, according to certain traditions and concepts, certain ideologies. People think that to become a successful person in China, you should buy property, buy an apartment. It's not like in Germany, in Berlin, where you don't need to buy property, you just rent. After communist China, things became very harsh, very tough, a very deep capitalism; actually, it's a mixture with neoliberal capitalism. And I think the government tries to get young people to buy property because they want to occupy their lives with money, property, cars, luxuries, and entertainment, so that they have no energy left for any kind of resistance, or for very different ideas. If you carry a heavy living burden, a lot of pressure, you don't have any energy to think about the community you're living in. That's why, in 2011, I talked with some artists, architects, anarchists, and activists. We wanted to do a project about this, about people living in these very productive cities, and to push against the very traditional, conventional idea of how to live in a big city. It's also like today: many new anarchists are trying to reclaim the networks, to reclaim online media, but on the other hand we have to reclaim traditional space, because many traditional spaces have been abandoned, like streets and factories, and even museum spaces. A lot of space has been abandoned by the new generation, because they spend too much time online, on social media.
So this project tries to urge young people to use their imagination. Like this: you can see this tricycle caravan, with a slogan from the French thinker Guy Debord, "All power to the imagination." Actually, I want to change it a little bit; we can call it "empower your imagination." Today we need that: a real anarchist needs to empower their imagination to find very alternative ways of living. That's the basic thinking behind this project: in the street, in a square, in an occupied or squatted place, you can use your imagination to create, make, remake, and recreate a living structure, or a living assemblage, or whatever you want to call it. So we asked almost 40 or 50 artists, young artists and architects, to make projects in streets and squares and other places they liked; every artist and architect could use their own experience to create some kind of portable living structure. I remember one of the young artists made something easy to carry; it's like a bag, a tote bag, but a bit big, and he could bring it anywhere, open it very quickly, and live there. There were many interesting works; many interesting works appeared in the streets, in squares, in the evening, in the forest. And this picture was recreated in front of a museum of contemporary art in Shanghai, the Ming Contemporary Art Museum; after seven or eight years, this museum commissioned us to recreate the project in front of the museum. So I brought some people together to make a camp in front of the museum. We also combined this camp with another project: an online podcast and radio called Uncut, which I initiated in 2012 with two other friends. This online radio is dedicated to socially engaged art and social action programs; last year we interviewed some Hong Kong pro-democracy activists in Berlin. So we combined this radio with the camp. You can see the caravan; a big megaphone was installed on it. During the opening we would do some guerrilla Shanghai radio: I did live interviews with activists and anarchists in Shanghai, inside the caravan, or sometimes I used recorded material to broadcast into the square in front of the museum. This megaphone, according to my memory, comes from the People's Commune era, in the 70s.
It's very loud; the sound could carry 500 meters away, or even 800 meters. And across from the camp there is a school, not a primary school, a high school, so even the students could hear this program. Because most of the tracks are very sensitive in China, I tried to build a platform in China, but they banned it; all of my tracks were deleted overnight. So I had to put this platform on SoundCloud. You can check all the tracks and the archive on SoundCloud, under Uncut. And this Uncut radio's motto is: uncut, uncensored, and unlimited. You can talk about whatever you want. We don't cut, we don't edit, we don't censor any of the content. No one will censor you, cut you out, or manipulate you. So we combined these together. And since I haven't found a picture from 2011, I'm using this picture in front of me. This is very urgent, especially for China, because in China, in the 1920s and 30s of the last century, anarchist thought was spreading very well; before Marxism appeared, many activists believed in anarchism. But since the Chinese Communist Party took power, it has become a very authoritarian, totalitarian government. That's why I tell some friends: that's the big difference between Marxists and anarchists. Marxists, if they have the opportunity, will take power and become totalitarian. Anarchists don't want to take power; they want to empower people, to share power with other people. So I think people in China need more and more anarchist concepts and thoughts to be shared. Yeah, just, hello. Yes, sorry. So, going back to the last point you said: basically you were saying that before the nationalist and communist revolutions, the big revolutionary movement in China in the early 1900s was actually anarchism. Is that correct? Yeah, for a very short time, just ten years I think, from 1918 to maybe 1928. After that, most people believed, or were told they should believe, in Marxism, because we needed to take down the Qing dynasty and build a strong country. The communists, the activists, told the people: we were exploited by the imperialists, like Great Britain, by colonial powers. That's why they tried to get young people to believe in Marxism and not anarchism, because anarchism is more libertarian, more individual sometimes.
But today in China, many people don't know anarchism, because not many media or platforms have tried to promote it in China, and the government tries to discredit anarchists. They tell young people that anarchism represents chaos and an unstable society, that your life will be thrown into turmoil. The government tells people: everybody needs to study and work and live a good life, and anarchists represent violence and vandalism or something. They try to create that image, but it's not true. That's why I think art is a very good way: to take advantage of contemporary art as a tool to spread anarchism. It's easier for people to accept, because they can find something fun and interesting to share together. You don't actually need to tell them "anarchism" or anything; you use practical, substantial action and community to let people believe: we can do it. There are many practical ways to let people think: oh, this is a good idea, this is what I really want to do. Many artists and young people actually want to invent, to recreate their lives, not just live in a boring modern apartment under burdens and pressure. They want to live an interesting life without burden; they want their own tribes, and to make their own things. And this kind of art project makes it easy for people to get excited about that. That's why we have tried to do this project long-term, from then until now. We are still doing it, and we are also still doing the Uncut podcast. But it's sometimes random, and also improvised media; with Uncut, we don't have a schedule for interviews. If we find an interesting person or an interesting project, we do it. So we combined these two projects, the camp and the interviews and online streaming, in front of the museum. But we don't only want to do this in front of the museum; we would like to do it more in streets and city squares. Actually, in 2015 I did a refugee project in Belgrade, Serbia, also in a public square; I was invited by European Alternatives to do a project in the city square in Belgrade. And in some projects, I also think the artist should use their imagination to open new possibilities for social activists, and not just stay in the studio. That's my basic thinking about today's artists: they shouldn't stay in the studio. They should go into the street, make something in the square, make off-site projects. And I'm also, yeah.
I also made a project at a university in 2012, because there was a big demonstration and protest in the square, and I was asked to make a project together with the students. I used a lot of recycled cardboard and banners, sprayed many sentences on them, and tried to start conversations with local students. I wanted to challenge them about what they were seeking. So there were a lot of conversations in the square, and that was a very interesting experience for me, because many students would come and write down what they were seeking. We had a lot of recycled cardboard and empty banners for them to use. We wanted to launch some provocative conversations, because we wanted to ask: is it a carnival, or a revolt? We tried to have a street discussion with the students: why do they come to protest, and what is the protest, carnival or revolt? Also, there is no oppressive pressure there like in China. In China you can't go into the street to do any kind of protest or demonstration, because after the Tiananmen massacre in 1989, the Chinese Communist Party has cracked down on any kind of protest or demonstration in the streets. You have no possibility to do that; if you appeared in the street to demonstrate, you would be arrested within five minutes. That's also why they cracked down on the very big pro-democracy demonstrations in Hong Kong. This Communist Party came to power through weapons, so they truly believe in the weapon; Mao said all power comes out of the barrel of a gun. So they are not afraid of any kind of demonstration. Of course, now they don't use violent massacres like Tiananmen Square; they use very different means, like online surveillance and censorship. Now they use infiltration, and they use gangsters to manipulate demonstrations. They try to use every resource to take down any kind of resistance in China. That's a tremendous challenge for Chinese activists. That's actually why I left and tried to come to Berlin: sometimes you can't do anything, and I felt very depressed about what was happening around me. I try to do something here, maybe to help the people in China. But according to the experience I have had working in the contemporary art field, I believe that most contemporary art today is very boring, because it has been gentrified and very deeply commodified. That's why I want to create something non-material.
I also want projects that are self-critical, because even anarchy has its problems. The anarchist hospital is maybe a kind of self-healing: you have to practice self-reflection, think about what you are doing, and then you can find a cutting edge, an interesting idea to put into practice. That's what I'm thinking. And it's interesting: living in Kreuzberg, Berlin, I have found many anarchist camps around Kreuzberg and along the Spree river, and squatted apartments too. Even in 2018 in Kreuzberg there was a movement against the Google campus, and it worked. That's very interesting. I'm also looking to international activists, and anarchists from China, to make a communication platform and exchange interesting ideas about how to build communes and how to fight today's unfairness and total control. We are living in a very technological time, in what some philosophers call the Anthropocene, and in it there is a lot of exploitation, a lot of censorship, a lot that is invisible and intangible. Today's control is often invisible, so you cannot feel it. As we discussed last time, we have to know exactly what they are doing, the governments and the financial system, and understand the technological background: how technology, finance and nation states develop new strategies and tactics to control and censor people. Only then can we know how to fight them. We can't fight like traditional anarchists; that really doesn't work now. We should devise new ideas for fighting these new forms of exploitation. That's my basic thinking, drawn from my experience as a practitioner: I deeply believe we have to build two kinds of resistance, online and offline. We have to find the weak points of this system and fight there, not just attack it blindly. That's what I'm thinking. Do you have a question?

First, I would really like to thank you for your reflections. We have traditions of anarchism that really imagine other worlds and other practices, other ways of fighting these structures of domination, like state surveillance and capitalist surveillance.
We also have to hack all these technologies into different shapes, ways we can democratize them, or, as we heard before from the cosmic and cyber girls, ways to heal through these practices. And I want to ask you about something you only barely touched on: the anarchist hospital. What is it, how did you conceive of it, and how can we make it a reality?

Actually, I think the anarchist hospital is a kind of re-thinking of anarchism. We have had many different kinds of anarchism before, but facing today's new technologies, new financial systems and new social structures, we need a new perspective to fight today's institutions, these invisible enemies. In the hospital we go to check all our bad or weak points, and then we overcome them. We learn our self-limitations, our self-restrictions, our weaknesses, and then we know how to treat them. That's the basic idea: every anarchist needs to go to the hospital to check themselves, and then, in solidarity with other people, we can find those points together, because another person may be a very good mirror for you. Through a lot of exchange about a new anarchism, we live in the hospital together, we treat each other, and we find interesting ideas to work on together. That's the idea.

I see that need in a lot of my friends on the left, in Berlin and other places: the need to have a space just to reflect and calm down and not organize for a little while, instead of each of us individually pretending we can stop it all. And maybe, as you were saying, it's about taking back not just the virtual spaces but physical space, so that we can create an anarchy hospital with running water and everything else we need. In Exarchia they have an anarchist clinic that helps people who cannot pay for healthcare; they can go to these anarchist stations for pharmaceuticals and everything.

Interesting. Yes, a real, physical hospital also works, because in the future, maybe even in Germany, some poor people will not be able to go to a hospital to be healed, so it is a good idea to have a public anarchist hospital to treat people who really need doctors.

I guess it's also a way of decentralizing knowledge about our own bodies. Today almost nobody knows their own body; most people, especially us men, never think about how our own physiology works. We are so out of touch with it, and creating a space where people can learn about these things and heal each other is a way we can all be doctors somehow. That's very idealistic, probably. Yeah.
And now we have a very severe situation with the pandemic. In this pandemic situation every country is under stress, squeezing people's power and trying to centralize power; even the German government, and certainly the Chinese government, which tries to control people totally, one hundred percent. So the whole world is a big hospital now. That should make people think about what we are living inside. It is a world hospital now, everywhere.

Yes, and how do we create those spaces for mutual aid and mutual care, so we can actually build health from the bottom up? Tomorrow we will have people from The Hologram: Cassie Thornton and Max Haiven will be speaking about peer-to-peer healthcare tools, so watch out for that tomorrow. The other thing, Ma Yongfeng, that I wanted to go back to in your very interesting talk was this whole matter of Marxists taking power and becoming totalitarian. We have seen that happen in China, in the Soviet Union, in many parts of the world, in Latin America as well. It makes me wonder, because there are many layers to this. If you look at the current debates over democracy in Hong Kong, which you mentioned, and in Taiwan, which some people call the other China, there is this back and forth between China and Taiwan, and Hong Kong too, about what the real China is, which one is communist and which one is capitalist. But at the end of the day, as you said, China is capitalist, so they are both capitalist nations. So I wonder whether the future of China might be an anarchist federation: all these places autonomously organizing themselves and democratically deciding over what pertains to them, without top-down totalitarian regimes that destroy the planet through their practices of exploitation and resource extraction. Do you see the possibility of an anarchist China, whether just in Zhongguo or in the whole Chinese-speaking world?

"China" is a culturally invented term. We did not call it China before, because we had many dynasties, from the Qin to the Tang to the Qing, and every dynasty had a different culture. We were also conquered by the Mongolians and the Manchu people, so China is a very complicated term. As for an anarchist future in China: in rural areas, and in areas that are not easily controlled, you can still develop anarchist communities. But the Chinese Communist Party wants to maintain one very large commune structure, like a dynasty, like the Qin dynasty, because China was unified into one country two thousand years ago. It is not like Europe; it was not a feudal system. Today we would call it authoritarian or totalitarian, but it is a structure that has existed for two thousand years. It is a very strong structure, and not easy to break down.
And because Marxism spread in China, they combined it with this traditional large commune structure, and it tries to control every small commune structure. They crack down on all small groups, because they want all the people to become the same and to believe in the central government. That is why decentralization in China becomes centralization. What is the new technology called in English? Blockchain. Blockchain in China becomes a centralized blockchain; you can't imagine that. All the new technologies in China have become new weapons, technological weapons to control people. It is not a bright, brilliant future for people to share; it is not about anarchism, it is about control. That is a totally different perspective in China. So I think there is still some possibility to develop anarchism, but not in the big cities; maybe in some rural areas, and maybe in Taiwan. Hong Kong is not possible now, because they control Hong Kong totally. Maybe Taiwan. But I know there are also some small anarchist groups in Wuhan and Guangdong, and they are active. I don't know what the future holds, but I feel they will develop, they will evolve by themselves. And the central government doesn't like this, apparently.

Yeah, I wonder why. Of course, because they see this thought as illegal: illegal assembly, illegal things. But at the same time it is interesting. You mentioned this whole mixture of structures, the ancient dynastic patriarchal tradition mixing with Marxist state socialism, and the construction of China out of it. When people talk about China, it is often hard to separate the Chinese people, the Chinese communist government, and the place itself, and I think that construction of China you were talking about is really a very new historical thing. And based on that, what about the other, more democratic traditions? Our friend here who was in Rojava, Kurdistan, was talking about this: you have the traditions of the Taoist philosophers, Taoism, and also Buddhism. Are there some grassroots remnants there, or was it all chopped out by the Cultural Revolution?

Buddhism and Taoism still have some presence; they were not all eliminated by the Chinese communist regime. And then there is liberalism: the liberal intellectuals in China, whose political idea is to adopt the Western democratic system. But the anarchists have their own tradition, and they are more interesting, more radical, as opposed to the liberals. Because we have actually lived in communes; it is real.
In some small areas it worked. I experienced a little of the people's commune myself when I was very young, as a kid, and for me it was very interesting and a lot of fun, because you could meet a lot of people and eat together; there was a dining hall for everyone. That is partly why some leftists in China like to promote and develop new anarchist platforms; they also run media and have translated a lot of articles about anarchism.

And is there a connection between the anarchists and the Taoists in China? Not very much, though people can draw on that tradition, because Laozi and Zhuangzi had a very idealistic proto-anarchism. As I understand it, they wanted many different small countries side by side, like Europe, not something like China now; their ideal was many small, separate countries. The Buddhists have their communities too, like the Christian hermits. But religion has been manipulated by the government, by politics, in China: many religions, temples and churches have to answer to the Chinese Communist Party first. That is a big challenge for religion in China.

So religion is managed by the state? Yes. The Communist Party says you can practice religion, but you have to listen to them; you cannot mount any kind of resistance, or they will censor you. The party comes first.

Well, since this is an uncensored place and we need to close down really soon: do you have any uncensored ideas or comments you would like to share with the world before we close, Ma Yongfeng?

Yes. At last I want to say that today we have a lot of space, physical and virtual, to reclaim, to recreate, to redefine and to reinvent. That is the big challenge for us, and that is what I am seeking.

Thank you so much, Ma Yongfeng, for your time and for sharing your thoughts and your imagination. You're frozen... okay. What did you say? I just said thank you for your time, for sharing your thoughts, and for allowing us to imagine what other worlds can be possible, specifically from your experience in China, and really in general with your art. Thank you. Thank you for inviting me to speak.

One more thing, because usually the Stalinists and the Marxists are watching, and I would like to end on this: many European Marxists look at China as the pinnacle of communism, the best place where socialism happened. Maybe you want to send them a message? Well, I also know a lot of anarchists, not only Marxists, because China is a very complicated country; of course it has a lot of layers and entanglements.
It's not one thing, of course. No, I just meant the idealization of the Chinese Communist Party as this big, ideal socialist place where communism really happened. There is a lot of that idealization and romanticism in Europe; I encountered it about China when I came here. The traditional Marxist left, or rather the Maoists, as I would even call some of them, really romanticize and idealize China, and it feels opportunistic in many ways. Have you encountered this yourself?

Yes, 99% of European leftist intellectuals have idealized China, especially the Cultural Revolution; they have imagined it as an ideal. But in reality enormous numbers of people were killed during that time. It was a big nightmare, not an ideal. So the future is anarchism, not Marxism.

Definitely. Thank you, Ma Yongfeng. Super inspiring. Good night. So now we take another small pause, and then we are back with Circles UBI with Julio. Then we're going to give Julio a nice massage. All right.
Guerrilla Living Practice in China GUERRILLA LIVING SYNDROME The project Guerrilla Living Syndrome! is mainly designed to launch a kind of social action and social practice around "Alternative Living Practice" and "Guerrilla Architecture", focusing on the value of living in fast-developing cities in China, taking a suspicious and uncooperative attitude toward conventionalized social attitudes, while creating various possibilities for guerrilla, flexible and time-based living practice in each participant's own style. We think that today everyone is a sojourner! So we will examine the general experience of individual migration and city wandering; more importantly, we want to get each participant involved in a kind of social "Micro Practice" through this guerrilla and temporary experience, while presenting each participant's distinctive attitude and values in various forms.
10.5446/52010 (DOI)
So, I'm Hen/i of Stratofyzika, a Berlin-based intermedia dance collective, and we've been making work since 2012. Over this period we have made several collaborative efforts to build interactive performance experiences between mediums, including 4D sound, animated and creative-coded visuals, and choreographies with sensors on the body that affect sound and visuals. If you want to know more about those works, you can visit our website via the link I will paste into the Telegram chat. But we're here tonight to talk about the current work we're building and researching, titled Human/ID. It grows out of questions about human identity and technologies like AR, AI, and the ability to deep-fake a personality: what is the thing that makes us human, and why do we feel so threatened if it could be overwritten? For me, these questions began a couple of years ago when I started teaching myself to code and wondered whether I could take some of the syntax and put it on bodies and choreographies. At that time Daria and I were making a lot of loops of choreographies, so I thought it would be fun to do something simple like .pop or .push and play around with the phrasings. It started out playfully, but soon bigger questions emerged. I became fascinated with the idea of being able to look at the code behind a project like a website, where you can clearly see what is functioning and how it looks, and I started to wonder: what is the code behind a human? That is not as easy an answer to discover, or even to peek in on. We started with a simple outline: think of the human as a shell that runs scripts or programs, and then ask the bigger question: could we take one performer, one shell, exchange in the script of the other, and erase the original copy, the original personality, before our very eyes? In other words, is it possible to produce a live deep fake in performance? Since those questions emerged, I have been lucky to pursue this research with the support of the Combustible residency, hosted by CounterPulse in San Francisco until April of next year, and in collaboration with Alessandra Leone, a co-founding member of Stratofyzika, who does the visual content, the video and lighting, as well as sensor building, but who unfortunately could not join us tonight. The two collaborators present are Daria Kaufman, a longtime partner in crime in making thoughtful, intricate choreographies, who has been kind enough to share a stage with me over the last ten years or more, and who I would say asks the relevant, probing questions that help mine the meaning of the works; and Ian Heisters, a new media artist working with dance and installation, who is the AI researcher for this project. I will also post links to their websites in the chat. But without further ado, I'm going to let them talk about their parts of the research. Daria?

Well, actually, before I talk, I just wanted to show some movement studies I've created as part of Human/ID: a couple of minutes of short choreographies that I call glitch studies. There's no sound for this, just so you know. Thank you. So, those are just a few short glitch studies.
I've been thinking about how our identities and mannerisms are mediated through video technology. Like so many of us in the world right now, I'm spending so much time on video calls, interacting with people so much that way. And over the past months there have been all these moments on a video call with someone where the image just freezes on their face. It's a strangely intimate and voyeuristic moment: I don't know whether they know they're frozen, and I don't know whether I'm frozen, and I'm just staring at them in this kind of punishingly naked moment. It is simultaneously painful, but I can't really look away. I've been thinking about that, and about the fact that this is how so many of us are interacting now; this is just our new normal. So there are the freeze moments, which I find really powerful and kind of poetic in themselves, and then there are the glitches, this kind of stuck-in-place back and forth. I've always been interested in glitch in video art and sound art. I feel there is an opportunity for transformation in the glitch: for the thing that is glitching, the glitch is a vehicle to transcend whatever it is. And right now, through video calls, I'm confronted by this glitch phenomenon quite a bit. It also has greater relevance in the world right now, because the whole world is sort of glitching: we're all stuck in place, unable to move forward or back, and yet we need to move, you know? So this is what has been inspiring these studies. In terms of Human/ID, we're creating this entire project through video, which none of us has ever done before. We're working remotely; never have all four of us been in the same place, and it's going to continue like this for a while. It's a whole other way to create. And it's interesting, because the process, this remoteness, everything being mediated by video, has become the subject of the piece: how are identities mediated by video technologies, filtered by them? So that's a little of my personal thread, and I'll pass it on to Ian.

Hi, I'm Ian. Picking up from what Daria said about running this entire project through various video networking tools: we've been working with neural networks, researching how to build them into the piece to create certain deep fakes, and thinking about how a neural network can learn the identity of a person, whether any valence of who that person actually is gets learned by the network. And for me there is the further question of whether not the neural networks but the networks connecting us now, the Google and the Facebook and the Zoom, are learning something about the identity of the project itself while we work on it, in a lower-tech, less obvious way. Here is some of the deep learning work we've been doing; let me share my screen. Okay, y'all see this?
One of the initial things we talked about was this: the piece is very much about identity, how identity is expressed through movement, and how a neural network can be interposed into that relationship to start messing with it. So we researched a deep learning technique called motion transfer. This early research is based on a paper out of UC Berkeley, here in California, called "Everybody Dance Now", applied to our performers: Daria is along the top and Hen along the bottom. What we do is take a bunch of frames of video of, say, Daria dancing and run a pose estimation on them, which reduces her body position to a set of labels for where her different body parts are. We train a neural network on a performance of Hen's so that it learns her visual style, what she looks like. Then we apply the structure we pulled out of Daria's performance to Hen's visual representation, and we end up, down here on the bottom, with a synthesized image of Hen performing Daria's movement. This one is a little buggy: you can see her shadows get picked up by the neural network as part of her body, because the shadows look so much like a body. But it opens up this weird puppeteering, this weird amalgamation of their identities as expressed through their bodies and then through the camera. The camera becomes a really important player in this; it raises a lot of questions about how you film, both technically and in terms of cinema. So, based on that outline, we can take a video like this of Hen performing a movement, train a model on Daria's visual representation, and get Daria, still very buggy, we're testing and experimenting, performing the same movement, even though she never performed this dance. Let's put these side by side and restart them so you can see the synchronicities. Obviously there are errors in what the neural network learns about Daria's identity and her visual representation, and it gets especially screwed up around the face. It has learned a lot about her hair, but it is a little confused about what is hair and what is face; if you look closely, you can see some upside-down eyeballs in her face, everything contorted. That is something we might address, or something we might leave in the piece. It is also strange, when you make these videos, to watch them and see what you recognize as yourself: oh, that's me, that's Daria, in a video of me doing something I never actually did. It creates this weird dissonance.
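To make the pipeline just described a bit more concrete, here is a minimal sketch of the motion-transfer idea, assuming per-frame joint keypoints have already been extracted (for example with OpenPose) and frames are normalized to [0, 1]. The tiny generator and all names are illustrative; the actual "Everybody Dance Now" method also uses adversarial and temporal-coherence losses, so this L1-only version is a deliberate simplification, not the paper's implementation.

```python
# Minimal sketch: skeleton-conditioned image synthesis for motion transfer.
import torch
import torch.nn as nn

def rasterize_pose(keypoints, h=256, w=256, sigma=3.0):
    """Render (x, y) joint positions into one Gaussian heatmap per joint."""
    ys = torch.arange(h, dtype=torch.float32).view(h, 1)
    xs = torch.arange(w, dtype=torch.float32).view(1, w)
    maps = [torch.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
            for x, y in keypoints]
    return torch.stack(maps)  # (num_joints, h, w)

class PoseToImage(nn.Module):
    """Tiny image-to-image generator: pose heatmaps in, RGB frame out."""
    def __init__(self, num_joints=18):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_joints, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, pose_maps):  # (batch, num_joints, h, w)
        return self.net(pose_maps)

def train_appearance(model, pose_frames, target_frames, epochs=10):
    """Learn the target dancer's look from (pose map, real frame) pairs."""
    opt = torch.optim.Adam(model.parameters(), lr=2e-4)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        for poses, frame in zip(pose_frames, target_frames):
            opt.zero_grad()
            loss = loss_fn(model(poses.unsqueeze(0)), frame.unsqueeze(0))
            loss.backward()
            opt.step()

# Transfer: feed poses extracted from dancer A into a model trained on
# dancer B's frames, and B's image "performs" A's movement.
```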
Coming back to the research: the original paper we're working from does a skeleton-based structural analysis of the body, and that analysis really has to have the full body in view; when it loses a limb, it starts to freak out, which creates a lot of limitations in what you can do cinematically. So I started playing around with another technique, developed at Facebook, called DensePose. What you see here, underneath, is a video of Hen just from the shoulders up. Instead of marking out only the basic structure, here's the head, just two little dots and a line, DensePose creates a silhouette or a mask, and we can also lay over eyebrows and nose. That becomes really helpful for different kinds of video: working with DensePose opens up the frames we can use. It still has a lot of limitations, but, for instance, we can do a shoulders-up, medium close-up shot like this, and then we can do a sort of portrait transfer. I'm standing in here because I couldn't get a video from Daria. This has other shortcomings: because DensePose creates a full outline of the body instead of a skeleton, you again end up with an amalgamation of bodies. It keeps my facial shape, including my beard, because it doesn't understand how to segment a beard from a body, but it puts Hen's appearance on my outline, my body type. And obviously there are strange glitches around the mouth, probably also due to my beard; beards always throw off these kinds of computer vision analyses. There is something about the portrait that interests me in this project, because the portrait is the photographic analogue of much of what we're talking about: capturing an identity, finding an identity photographically or cinematically. But I've also been playing around with some unintended mappings, leaning into the space where things start to screw up and you can see the edges. Instead of going for the hyper-realistic deep fake you could use for propaganda, here's Vladimir Putin or Barack Obama saying things they would never say, how can we go the other direction and find the edges of the algorithms, where they screw up? Because there you get a perspective on what is actually happening, what the algorithm is actually doing. So here are some experiments I did mapping my hand to my face. You are basically telling the neural network: this hand equals this face. You do that tens of thousands of times and it is able to resolve it, so that when I show it a different hand in a similar position, it says: okay, that's this face. The face you see, on your right I hope, is completely synthesized, but it maintains a lot of the details of the original photography and cinematography. And you start to see these weird artifacts of the algorithm, like the cross-hatching on the beard. Sometimes it resolves really clearly and you think, oh, that's a photograph of a person; and sometimes you see more of the hand.
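The hand-to-face experiment comes down to the pairing step: frame i of the conditioning video is simply declared to correspond to frame i of the target video, and the network sees tens of thousands of such pairs. Here is a minimal, hypothetical sketch of that pairing; the same structure works whether the conditioning image is a skeleton render, a DensePose mask, or a hand, which is also why unanchored details like beards and hair leak across.

```python
# Minimal sketch of the paired-frames idea; names are illustrative.
from torch.utils.data import Dataset

class PairedFrames(Dataset):
    """Paired conditioning/target frames from two synchronized videos."""
    def __init__(self, cond_frames, target_frames):
        assert len(cond_frames) == len(target_frames)
        self.cond, self.target = cond_frames, target_frames

    def __len__(self):
        return len(self.cond)

    def __getitem__(self, i):
        # Frame i of the hand video is declared to "mean" frame i of the
        # face video; nothing anchors the mapping except this pairing.
        return self.cond[i], self.target[i]
```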
In working on this, one of the things I'm interested in exploring is what the algorithm is actually learning about me. Is it learning anything about my identity? Is there some kind of vectorized representation of my identity, in the same way that a photograph captures some dead moment of somebody, some little slice of their identity that you can see shine through and that is legible to you? Is there some n-dimensional representation of that person left behind in the network? Is this a way of maintaining a memory of a person? I think that wraps up what I have to say. Thanks, guys.

So we have with us today Chris Harris and Miriam Simun, and I would love to invite them to introduce themselves.

Sure, I'm happy to go first. Thanks, Hen, Ian and Daria; that was interesting to see what you've been working on. My name is Chris. Historically I have quite a technical background: I studied AI quite a long time ago, and, long story short, although I started studying AI because I was fascinated by human data, human-computer interaction and biomimicry, I became quite disillusioned with a lot of the uses of the technology. Indeed, there is a lot of critique of the way we currently approach AI, starting with the direction from which we approach it: taking human intelligence as the apex of intelligence and trying to mimic it, or using it to subvert our own intelligence. I appreciate the investigation you have done into where things break down, using it as a critical inquiry into the scope of our relationship with identity and how much a machine can or cannot learn. I'll let Miriam say a few words about herself; I know she is a very interesting HCI practitioner. Miriam?

My name is Miriam Simun. I'm an artist, and I also do user experience design, so I think a lot about how humans interact with machines; broadly, as an artist, I'm interested in the social, poetic, political and weird implications of new technologies and bodies. So yeah, really excited about everything you all said.

Cool. Daria, I had a couple of comments on your observations about the glitch studies you showed. I really liked what you said about the time we spend on Zoom and the Zoom fatigue we get, and I feel a lot of that comes from the lack of feedback we get in a virtual setting as opposed to a physical one; you mentioned the role of technology in mediating, and indeed in reducing, the bandwidth we have between each other. What I really saw in your glitch performances was an asymmetry of perspective. We always have some asymmetry of perspective, even one human talking to another, although shared context grounds us in some mirroring of where we are; but at this distance there is so much dissonance between what I see and what the other person sees, especially when things break down and start to glitch out, and you become a voyeur on this slice of reality, this almost punishing voyeurism you can't quite look away from. I think we're constantly fascinated by what is quite an unconventional viewpoint on a normally conventional scenario. So I appreciated your studies there.

Thank you. I appreciate that.

I have a question. Sorry, I just interrupted.
Okay, there's a lot here, so I'm going to ask my last question first, because it's a really big one. It's about one of the last things Ian said, but really all of you touched on it: what is the algorithm learning about me? Is it really learning about my identity? There is also the question Hen started with, about a personality being put in another shell; and here personality is equated with movement, which I get, but that's a big thing to unpack. I was recently listening to Susan Sontag talking about taking a photo, about what the photographer takes from reality, from that moment, from the subject, and that is just a still photo being printed. It kind of exploded my mind about everything you're doing, because: what are you taking? So that's my first question, a huge one. And because you have been doing this, and I actually see you finding the beautiful part, where the machine gives you more rather than takes away, more than human-to-human exchange does, I do wonder, having gone through this process: what is it taking from you, or what are you taking from each other?

I can speak to that a little bit. There's a lot in there; I'll focus on two threads. One is that earlier in our process Ian introduced us to Svetlana Boym, and she has been a kind of touchstone; her work was really useful for me to read, and it keeps returning, this wanting to look at the flaws and the errors: you go towards the machine, you go towards the mistakes. That runs through the project. The other thread is what you were saying towards the end about taking: the thing that is being captured doesn't even belong to me. Where does it lie? I think that's a really interesting question. I understand people feeling threatened by these technologies, by the notion that someone can take your image and make it do anything they want, and it is described as a violation. But the thing being taken is you and it is not you; it is a representation of you. So I think that is just the territory right now, and I'll leave it at that.

And I'm surprised to hear you say that we're creating all this beauty, because what it takes is so obvious to me in these strange neural pastiches of different body parts; I mean, it's grotesque. Where the beauty does come through is in something from the solarpunk manifesto for this event, about imagining new possibilities for our relationships to technology. And that also comes from Svetlana Boym: she talks about nostalgia for a future, or even a present, that is parallel to the current one. She calls it the off-modern.
So we have the dominant modern ideology: AI is going to save us, and it is also going to put us all out of jobs, and Elon Musk is creating this modern notion of AI; I'm using him as the stand-in, as he is for a lot of people. But then there is an imagination, or a nostalgia, for this other possibility we had, where technology was built for the kinds of interrogations we're doing, for creativity, for thoughtfulness, for beauty. And part of what we're doing is exactly that. The tools I'm using were built for automated driving; a lot of this was built to train cars how to drive. So we are taking these high-grade industrial tools and hacking them to create another possible application of technology, one that is a little more introspective. I don't think that answers any questions.

What are you trading? This is my question. Because the thing I find beautiful is all of it: the beast Daria made, all the weird glitches, the hand inside your face, all this crazy, to me beautiful, stuff that is in there. What are you trading for that?

That's a good question. I wanted to follow on from that and extend it into my own question, especially from when, Ian, you were showing the puppeted movement and the machine was seeing into different parts and projecting more human there. I think there is a bias of identity at work: we as humans have a bias about what constitutes our identity, and the machine now has a bias about what constitutes identity, based on what has trained it. And bias is the right word, because neither is necessarily correct. What is our identity, what is the environment, and is the machine right to project a human onto the side of a wall? So the question I wanted to lead up to, asking you as artists and practitioners doing this work: have you had any shifts of identity, have you grappled with this, do you have new considerations after doing this work?

Really, this project has been very extended, because of COVID and because we are working through technology so much. When you interact with someone, there is you, there is them, and there is the thing emerging between you; and I feel that when technology sits in between, that in-between is bigger. Because there is more of a disconnect, there is also more potential for something to arise in between. In our process, for instance, I was really frustrated for a while. I wanted to hold the piece and know what it is, to frame it neatly, and I kept coming up with a frame, "it's this", and then we would get on a call together and it would become something else. There was this back and forth between being with them virtually and then being on my own, with all the storytelling that goes on; I have my own storytelling that I do alone, and then I have to reconcile it with them on these calls, and it was getting exhausting. So eventually I let go and accepted: okay, the story is just emerging.
And I need to not try to have so much control over it, to stop trying to hold it and let it be what it is. Ping-pong is a structure we've landed on: let it ping-pong, and each time the ball comes back it's a little different, and that's okay, and I don't have to hold it.

Yeah, that was beautiful. Thank you very much, Daria. The identity is created in the spaces in between; just let it be, don't be afraid of it, and know that it can happen and I can still be me, and that's okay. Brilliant words to end on. I think that's about all we have time for, so thank you very much, Daria, Hen and Ian, and Chris and Miriam; a pleasure to speak with you all. Thank you. Thank you.
German: Es wird nach dem Intro ein falscher Titel angekündigt! (Note: after the intro, a wrong title is announced.) English: Collaborative members Hen/i (@stratofyzika), Daria Kaufman and Ian Heisters in conversation with Chris Harris (@hellokozmo) and Miriam Simun (@miriamsimun) to discuss their current work "Human/ID" - a performance/video installation. Kaufman will discuss her "glitch studies" - short choreographies inspired by some of the 'errors' commonly experienced through Zoom (and other video call platforms), such as freeze frame, lapse, and glitch. Heisters will present the framework for deep faking dance and discuss the research into deep learning, dance, and film
10.5446/52011 (DOI)
So, yeah, thanks again for being here. I was really excited to hear the podcast you did last year with my friends Simone Shichirou and Sina; there were a number of takeaways I find extremely interesting, because they combine an epistemological shift with a rethinking of the regulatory and political framework, and also connect to a shift in consciousness, especially consciousness bound to certain geographies. You made a comment about Western and Asian countries, especially Asian countries that still think they can get capitalism right, this kind of resistance of modernity that tries to permeate other countries as well. And something that really resonates with me specifically is this pulsation, this regeneration, tied to a new way of accounting for energy flows. I think this is a very fitting stage for it, because once you study a little of what is going on in the world, you cannot help thinking that we need somehow to tie the thermodynamic nature of our biosphere to our constitutions and our laws. Of course it is a problematic space, and not technically easy to do, but I'm very excited and curious to hear what you have been working on in that direction. It is also a fitting stage because every year the CCC hosts more climate-awareness and climate-adaptation talks, talks that get into the nitty-gritty of all the planetary boundaries we are hitting, inviting scientists, politicians and different perspectives; I think that is a good direction we are taking as a Congress. So I leave the stage to you. Also with us is Oliver Souter, who will join us at the end of the talk and has just joined the room, and Jacob Hoon, who will also ask some questions at the end. The stage is yours, Michel.

All right, thank you so much. It's the first time I have talked at a CCC-related event, so I'm very happy about that. Basically, we wrote a report at the P2P Foundation called P2P Accounting for Planetary Survival, in which we looked at emerging new forms of accounting and what they tell us about how we can reorganize our economic flows. Can we produce for human needs within the planetary boundaries? That assumes that when we make decisions we know what those boundaries are. The key issue we have to solve as humanity is the issue of externalities: positive and negative, social and ecological. The system we have now, based only on the value of commodities, including labor as a commodity, does not take these externalities into account. Various studies have shown that market pricing does not actually reflect dangers to the ecology of the planet: we are in the sixth extinction of the animal kingdom, and it is not reflected in our economy; we cannot see it. That has a lot to do with the history of accounting, and I want to start there, with how important accounting has been in the history of humanity. If we look at the origin of accounting, it is also the origin of the state: Mesopotamia, around 2000 BC, though I don't recall the exact date.
The exact dating is in the book Against the Grain by James Scott, which I really recommend. The first clay tablets with written language are actually accounting tablets: the grain coming in and out of the royal temples. That is very important. I learned, for example, that for a thousand years written language was used not to represent speech but to represent accounting, which is really interesting. The second phase is the reorganization of accounting by a Franciscan monk in Italy, Luca Pacioli, if I pronounce it right, who codified double-entry bookkeeping. This is very important, because double-entry bookkeeping is about what comes into my closed entity, my firm, and what goes out of it. Again, you have no vision of the natural world or of what you do to people; it is simply economic value, whether your profit and capital are going up within your own entity. It is a rather narcissistic form of accounting, and it is the basis of our current economy. So the question is: can we change that? Can we have forms of accounting that reflect not just the value of the commodity, but the value of what we do to people and the environment? For me that is a third accounting revolution, and it is connected to the emergence of the blockchain. Whatever we think about the blockchain, what it does is create an infrastructure for shared accounting, so we can potentially move from closed accounting to ecosystem accounting. Nearly all the blockchain projects you can look at are about this: they are open collaborative systems, where people can come in and out, and they can share accounting and logistics to the degree that they want to. I think that is a very important step. To make it simple for people who are not technical, and I am not very technical myself: we move from the Internet of Communication to the Internet of Transaction. What that means is that what we developed around open source and the urban commons, what I call stigmergic collaboration, mutual signaling, is now possible in physical production. Before the blockchain, we did it for open source: knowledge, design and software. Now we can do it for everything that is represented through accounting. Any flow we can represent through accounting can be subjected to the same capacity of looking at each other in an ecosystem and adapting our behavior in real time if we want to. That is really what I want to talk about. What we usually do at the P2P Foundation is not to say how things should be, but to look at what we call seed forms. Seed forms appear when a system is in a crisis that cannot be solved within the logic of the system: people start exiting the system and trying to solve their problems in new ways, and these new ways potentially represent something that comes after the crisis. Think of medieval times: the invention of purgatory, so that suddenly you can be a Christian and earn money; the printing press; double-entry bookkeeping. These were actually the basis of capitalism. So in the report I describe three new forms of accounting. The first one is contributive accounting.
Typically, any open source community that is open to permissionless contributions has the same issue. The whole community co-creates use value, something useful for everyone, like the Linux operating system or Wikipedia. Then, over time, a market is created around these open source commons, and the issue becomes that only some people can realize that value through the market: a lot of people contribute to an open source system, but only some of them can make a living through their efforts. So there is a question of equity and fairness that is typical of open source projects: how do you deal with that dynamic? The solution is contributive accounting. The basic principle is simple: you have a community, with some kind of membrane around it, and you say that whatever comes in from the outside, which can be government subsidies or market income, is partly redistributed through a different accounting system internal to the community. A very simple example: a translators' collective does a lot of things for free. They count the characters they translate, and they agree that whenever anybody gets a contract to translate a book, which is partly due to the value they co-created together, 15% of it goes into the second accounting scheme, and that funds the contributions. My friends at Commons Transition have a booklet about this, about DisCOs, distributed cooperatives: how you create internally a more just and fair distribution of value, based not just on commodity value but on contributions. I call this value sovereignty: you declare within your community that you are going to treat value in a certain way that differs from the mainstream economy. And by the way, this is nothing new. If you know Bernard Lietaer's The Mystery of Money, which exists in German, he shows that throughout history there have always been two kinds of currencies. There was the royal currency, the extractive currency, used basically to fund the army; and local communities often had different currencies. For example in Bali, which is my neighbor, since I live in Thailand, they have a currency for the watershed, to fund watershed maintenance in those communities, and it coexists with the national money.
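As a concrete illustration of the membrane idea, here is a minimal sketch of the translators' scheme. The 15% rate and the idea of counting characters come from the example above; the names and numbers are otherwise invented.

```python
# Minimal sketch of a contributive-accounting "membrane": a fixed share
# of external income flows into a commons pool and is paid out in
# proportion to logged contributions.

COMMONS_SHARE = 0.15  # the 15% the collective agreed on

def distribute(income, contributions, commons_share=COMMONS_SHARE):
    """Split external income between the earner and the contributors."""
    pool = income * commons_share
    total = sum(contributions.values())
    payouts = {who: pool * points / total
               for who, points in contributions.items()}
    return income - pool, payouts

# A member lands a 1000 EUR translation contract; free contributions,
# counted here in translated characters, determine the pool split.
kept, payouts = distribute(1000.0, {"ana": 120_000, "ben": 60_000, "chloe": 20_000})
print(kept)     # 850.0 stays with the contract holder
print(payouts)  # {'ana': 90.0, 'ben': 45.0, 'chloe': 15.0}
```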
The second form of accounting is flow accounting, which has existed for the last 20 or 25 years but is now taking off. It is called REA: Resources, Events, Agents. I call it post-capitalist accounting because, just like contributive accounting, it moves away from the pure commodity value system. In flow accounting you do not have double entry, you do not have a closed entity: every transaction recorded is a three-dimensional picture of where it sits in the ecosystem, the resource, the event of exchange, and the agents doing the transaction, and it is visible to all the people in this open ecosystem. The third form of accounting I discuss could be called thermodynamic accounting: accounting for matter and energy flows. There are different projects you can look at; MuSIASEM, for instance, is based in Barcelona. But the one I looked at is called Global Thresholds and Allocations, from r3.0. These people are multi-capitalists: they want to take into account not just financial capital but also environmental and human capital, and to treat these capitals equally. Today, if you cheat with the finances you can end up in jail, but not if you do something bad to the people or the planet. If those three capitals were recognized, you would actually be liable for how you treat these externalities. We are not there yet, because that would require a lot of legal reform and probably social struggle. But one of their innovations, which is really interesting, is the global thresholds and allocations themselves. What does that mean? Some global institution, basically a group of scientists, monitors the availability of resources: not just the planetary boundaries described by Kate Raworth, but all the commodities. There is a list of 131 of them, which they call commodity ecologies. Imagine a group that keeps track of what is available in the world in terms of copper: the expected findings of copper per year based on a historical average, the growth of productivity in the use of the material, and its bio-circularity, meaning how much copper is left if you reuse it (I think it is 70%). They have these tables. What they are prototyping and experimenting with is diffusing this knowledge of flows of matter and energy into every accounting system. And that begins to become quite exciting to me, because it means that you have the capacity, as an agent, to make decisions within that framework; it does not have to be top-down, like the rationing that could happen if we continue as we are doing today, but actually enables and empowers all the agents of a distributed system. I want to continue with this, trying to convince you that it opens up the vision of what I call a cyber-physical infrastructure for production for human needs within planetary boundaries, based on three different levels. I think most of you are too young to know these debates, but in the 1930s there was a huge debate called the calculation debate. On one side you had the old-style socialists saying we need to rationally plan the economy; Karl Polanyi and others were involved in this debate. On the other hand you had Schumpeter and Hayek saying no, this is not possible, we need markets to do that. I think it is possible to do three things: take the best of the commons, stigmergic collaboration; take the best of market pricing, as a signal for allocation; and take the best of planning, in the sense of frameworks that determine the degrees of freedom within certain limits.
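A toy sketch can combine the two ideas just described: REA-style records, in which every transaction names the resource, the event and the agents and is visible to the whole ecosystem, checked against a per-resource allocation in the spirit of r3.0's Global Thresholds and Allocations. All names and figures below are invented for illustration.

```python
# Toy sketch: REA-style events checked against a resource allocation.
from dataclasses import dataclass

@dataclass
class EconomicEvent:
    action: str      # "produce", "consume", "transfer", ...
    resource: str    # one of the tracked "commodity ecologies"
    quantity: float  # in the resource's own physical unit
    provider: str    # agent the flow comes from
    receiver: str    # agent the flow goes to

# Yearly allocation available to this ecosystem, derived upstream from
# expected findings, productivity growth and bio-circularity tables.
ALLOCATIONS = {"copper-kg": 50_000.0}

def within_allocation(ledger, resource):
    """Does the ecosystem's total consumption stay inside its share?"""
    used = sum(e.quantity for e in ledger
               if e.resource == resource and e.action == "consume")
    return used <= ALLOCATIONS[resource]

ledger = [
    EconomicEvent("consume", "copper-kg", 12_000, "fab-coop", "fab-coop"),
    EconomicEvent("transfer", "copper-kg", 3_000, "fab-coop", "repair-cafe"),
]
print(within_allocation(ledger, "copper-kg"))  # True: 12000 <= 50000
```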
You see what everybody else is doing, and so you can adjust your behavior, because you have holoptic knowledge of the whole system. Just as when you work on Linux or Wikipedia, you know what everybody else is doing, and so you can adjust your contribution. That's the same idea, but it's now moving from just knowledge, design, and software to accounting and logistics, and to coordinating physical production. The second thing you can do is inject what are called generative market practices. You might be familiar with RadicalxChange; there's a group of people who are specialized in this: how you can use market techniques to actually have fair and equitable outcomes. They focus on that. And, for example, you would have a proposal like fishcoin, which basically says: this is a cryptocurrency for the fishing industry which represents the reproduction capacity of the fish. So it's not money that is blind to externalities, but a form of money that is intelligent and integrates that knowledge in itself, right? You only issue as many fishcoins as there are fish that you can catch without endangering the reproduction of the fish. It's just one example. So you could have a whole series of these specialized currencies that manage specialized markets within certain constraints. And then the third level is what I call orchestrated planning. And here I'd like to introduce a little concept I call the fifth magisterium of the commons. I got it from a friend of mine called Robert Conan Ryan, I think. And his idea, which resonates with me, is that we have four magisteria: politics, which regulates what we get from the state; the economy, who gets the surplus; culture, what can be said and not said; and science, which legitimizes facts. And you know, they're relatively independent functions in our societies; they're even beyond the nation state, regulated at transnational levels. And I would suggest that we also need a magisterium of the commons. In other words, we need a capacity to manage and protect human communities and extra-human resources and communities, right? So basically humanity embedded in the web of life. And you might be interested in this; I won't have time to explain, but we now have movements like the sovereign nature movement that propose to have DAOs. You have a forest commons that is a DAO: it has sensors, it can mobilize lawyers when there is overgrazing or overcutting of its woods, it can generate income, and it can actually distribute that income to the contributors who protect that commons, in that case, that forest. Just an example of how this might work in the future. So you see, the idea I am proposing is to integrate the best of the commons, which is the open collaboration effect; the best of the market; and also orchestrated planning, which is used as a framework. You're free within a certain framework: you can do what you want, but you can't destroy the conditions of life. That's the basic idea, all right? And so we need institutions that can protect that, and we don't have them yet. But that's basically what I wanted to tell you: that as we transition to a more stable human system, one that can live for a longer period of time in balance with nature, we are actually starting to have the technological means to do that. And these already exist. They're being prototyped, experimented with.
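A toy version of the fishcoin mechanism described above, a currency whose issuance is capped by the reproduction capacity of the stock, could look like this; the regeneration model and all numbers are invented for illustration.

```python
# Toy sketch of a "fishcoin"-style capped currency (illustrative only).
# Coins are issued only up to the surplus the stock can regenerate, so
# the money supply itself encodes the ecological constraint.

def sustainable_catch(stock_tonnes: float, regen_rate: float) -> float:
    """Surplus the fishery regenerates this season (toy linear model)."""
    return stock_tonnes * regen_rate

def issue_fishcoins(stock_tonnes: float, regen_rate: float,
                    requested: float) -> float:
    """Issue at most as many coins as there is sustainably catchable fish."""
    cap = sustainable_catch(stock_tonnes, regen_rate)
    return min(requested, cap)  # never mint past the regeneration cap

# A 10,000 t stock regenerating 8% per season: 1,200 t of catch is
# requested, but coins for only 800 t are issued.
print(issue_fishcoins(10_000, 0.08, 1_200))  # 800.0
```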
So this is not just utopian; this is actually something that is ongoing. And of course, technology alone is not the answer, because you need institutional change to make that work as well. But as you know, we work on public-commons cooperation protocols at the P2P Foundation, and that's also moving quite fast in the world. So I think that, as they say in China, crisis and opportunity are the same word: the more we enter this downward spiral of systemic crisis, the more people are motivated to look for alternatives. And so all these seed forms start to make sense, and all these patterns start connecting with each other and can be a prefigurative vision of a potential new economics. And with that I'll finish; you can ask me, maybe via Eugene or something. I have a three-chapter commons economics textbook that we are pre-publishing privately for the moment, if you're interested in knowing more; and then P2P Accounting for Planetary Survival: if you look it up in Google, you'll find the full text, which explains the story I was telling you now. So that's the basic story. Thank you, Michel. Thank you very much. I think our audience really enjoyed it. And somebody is asking in our Telegram Q&A, which I also invite you to join if you want: they're interested in all those references. And I told them to look in the book, but I also ask you, if you want, to just share a few links with me, also via email, and I can put them in the chat. And yeah, actually, let me just change the view so we can also have Oliver and Jakob on. And Oliver, I think you were really interested during the conversation. You're muted. Oh, I'm muted. Yeah, thanks, Michel, for taking the time to lay this all out. There were so many interesting and new concepts, also for me, that I want to dig into afterwards. And it was nice; it's the first time we've managed to actually see each other on a video call. It has been a while. My question that came up is: when you're a small organization or a startup that wants to start deploying internally these processes of better accounting, specifically value accounting, what do you recommend diving into? What reading, what practices are really useful, or maybe small practices that you at the P2P Foundation already use to do this? Well, there is no handbook for the moment, because all this is fairly new. So we have one little booklet, it's called Value in the Commons, which is just to help you think differently about what value is. Then, of course, P2P Accounting for Planetary Survival, which gives you access to these sources. Sensorica, for example, is a really good example of a very sophisticated contributive accounting system; they've been doing it for 10 years already. Sensorica is an open hardware commons: they make sensors. Yeah, it's Canadian, by Tiberius Brastaviceanu, I hope I don't misspell his Romanian name. And they have a lot of literature online about the project, and they're very sophisticated. For example, they hold all the collective property under what is called non-dominium. That might be interesting, just to explain to the audience: dominium is normal private property, I'm the master, the dominus, of my property, and I can do what I want with it. Non-dominium means it belongs to nobody in particular, but to everyone at the same time. And so what you can do, for example, in the case of Sensorica, is this: you need a machine.
You can crowdfund for the machine, but the machine goes into the property of the trust. And so anybody else in the system can also start using the machine. So you have a system for the collective use of the machine; you're mutualizing the hardware, as it were, in that system as well. So it's a very interesting and sophisticated system. And then DisCO, distributed cooperative organizations, which is kind of like a DAO, but for the commons. And that's by friends of mine, Stacco Troncoso and commonstransition.org. So you might have a look at that. I have a section in my wiki, which is wiki.p2pfoundation.net, with a lot of documentation under the section P2P Accounting as well. So until now, basically, what you need to do is talk to people like TB or me. We know many people, we're very well connected with the people doing this, and we'll say: well, talk to them, talk to them, talk to them. And then you will learn about it by yourself, and then you can say: okay, can I contextualize this in my own situation? And then do your own thing. There's no standardization yet; we are in the period of a Cambrian explosion of these new accounting systems. Yeah, and my question goes in this direction as well. I mean, it's interesting how these patterns will interact, how the fishcoin will interact with the tree coin. How do you envision the implementation of these new accounting systems, and what is the interplay between them? Do you have some foresight? Yeah, well, that's a bit of a problem. It's a bit what I said about the Cambrian explosion, and it's a big problem within the commons. I'll give you an example: I was in Tuscany a few years ago, and they had 16 different pieces of software to order organic food in the solidarity economy alone. So this is a big problem: people are reinventing the wheel all the time. So what I'm advocating, and actually believe in, is public-commons cooperation. I think we will have public institutions in the future as well; there's no way around it. And so we look at: can we have an optimal relationship between the commons and the public sphere, so that the commons remain free and autonomous but can still cooperate with the public sector? And here's a thing you could do. I don't know if you know John Thackara; he works on something called factor 20 reduction, which I'm very enthusiastic about. For example, if you want to transport goods and services in a city like Berlin, if you use cargo bikes and pedelecs, you can do that with 98% less energy and still do the same amount of transport. So the idea is that we have to start mutualizing our provisioning systems, all of them: real estate and housing, shared mobility, shared organic food. And then we build ecosystems around those provisioning systems. Community supported agriculture is a good example, right? It's an ecosystem that combines a group of consumers and a group of organic producers. And you know, they're still buying from each other, but the ecosystem is a commons. It's not based on a purely commercial relationship; it's based on solidarity and cooperation between all the partners in the ecosystem, right? And so the basic attitude that I advocate is the idea of reverse co-optation. What you do is ask: what kind of collaboration do we need with public authorities to make our commons work?
And what kind of market mechanisms do we need to make our commons work, right? So instead of having the opposite, which is capital eating the commons and extracting value from the commons, you have to reverse the logic and start thinking: how do we preserve our community? How can we preserve our livelihoods? And start organizing ourselves so that we get a bigger part of the surplus. And if you think about the blockchain that way: Chiang Mai is the capital of digital nomads, so we have a lot of people here developing these ecosystems, and that's already how they start thinking. When you do a crowdfunding, an ICO, and we can be critical about the hypercapitalist aspects of this, but it can be done in different ways, basically what you're doing is, first of all, distributing the capital. You're no longer dependent on just one bank or one venture capitalist; the whole world can participate in your project. And you consider the coin as micro-shares in the expected value of your project. And then they keep 40% of the tokens for labor, and that's really interesting. And then you have progressive groups like ECSA, the Economic Space Agency, like Commons Engine, like Commons Stack, like Holochain. These are all blockchain- or post-blockchain-oriented projects that have these other values embedded in them. And then finally, sorry, I'm rambling on, the idea is to create coalitions of public players, impact and solidarity finance players, and the commons, to mutualize the development of these infrastructures. Right? So if you do shared mobility, like CoopCycle, for example, you could have an alliance of cities supporting this for all the cities in the world. If you want to have FairBnB-type solutions, you could have these alliances funding open source repositories. This is something I call protocol co-ops, protocol cooperatives: they invest collectively in open source repositories, which can be used by everyone. And then you contextualize it, you localize it, but you don't have to redo everything every time. And that's the problem. If I can say one more thing about this: I've been following open source cars for 12 years now. None of them has worked, except maybe Open Motors, which is more like a white-label product, they don't really operate publicly, and Local Motors, which is just crowdsourcing the design; but none of them has really worked well. And now you have this company called Arrival, and they do everything with proprietary software. So finally we have distributed manufacturing, but instead of having an open system, we're going to get yet another proprietary system, with one company controlling everything. And that's very sad to me: we had 10 years to do this, and we haven't succeeded. If we're too slow, we're going to miss a lot of opportunities. And this brings me actually to one of the next questions I had, which is, in thinking about building up these regenerative processes of value accounting, and now the example that you brought with the open source cars: why do you think this didn't work? Why do you think it only worked once, or only started to work? I never heard of that company, so I assume it's a player that you consider as being maybe more capitalized. Yeah, they have 1,000 software engineers, so they're extremely well capitalized. And they have a big factory in London, and I think they're actually in Berlin as well, but I'm not sure about the details.
But they have their videos, you can check them on YouTube; it's very impressive. The whole thing is ready. It's like hub and spoke: you can install the factory in a week, and then they can start building vans and buses for the cities, which is something I dreamed about 10 years ago, and technically it was possible. So the problem is that open source itself works because it's people voluntarily working, you know what I mean? Any software engineer can say, I'm going to do open source software, and you can live with your parents until you're 40 and you can do it. But once you need buildings and materials, you need capital. And so what I think we need to do is find a connection between the cooperative world, which has a lot of money, you know, the big co-ops, and the open source world. And that hasn't happened yet. So that's the problem. So it really comes down to funding. It comes down to funding, and to finding ways to solve that issue, which hasn't been solved yet. So what we're doing instead, and this is not bad, of course, is that we're starting with all this small stuff. I don't know if you know the multifactory model. These are craftspeople coming together all over Europe, looking for empty factories, and then they work with wood, 3D printing, metal, iron, and they work with open source and cooperative principles. There are 120 of them in Europe already, and they have an invisible factory where they do their open source collaboration. So these things exist. Or in France you have a huge federation of farmers called L'Atelier Paysan. They have already made thousands of designs for open source machines for farmers; they're much more successful than Open Source Ecology. So these things exist, but they exist where there are low capital requirements. So they still function a bit in the periphery of the system. You need to unmute yourself, Jakob. I feel we definitely need a follow-up conversation on this one, because we will continue Placemaking the Solarpunk beyond the CCC as well. And we will come back later today to the scope of the city, having a conversation on municipalism and the municipalist movement. And for the framework of Placemaking the Solarpunk, we were really coming a lot from the neighborhood-scale level, and a lot of the ecosystem building just needs to come from a different level, even state innovation, or from different kinds of powers that support it. On the other side, I'm already very inspired, and I feel we can just wrap up here and keep the continuity open, as well as for our listeners in the chat to engage further and see how this can continue. Yeah, I'm open to any form of cooperation or more talks, whatever you need. Cool. And here to learn also, of course. Awesome. Thank you for joining, Michel, very much. We'll see you in five minutes with an exciting workshop on this channel. Stay tuned.
What kind of cyber-physical infrastructure can ensure that producing for human need can stay within planetary boundaries? This is based on the recent P2P Foundation report, 'P2P Accounting for Planetary Survival', which you can find here: http://commonstransition.org/p2p-accounting-for-planetary-survival/ Kate Raworth's Doughnut Economy proposes an economy that can navigate below an ecological roof, which preserves planetary boundaries, and above a social basement, which ensures the basic needs of humanity. In our study we looked at three inter-related emergences of shared accounting and distributed ledgers (contributory, flow, and thermodynamic), and the possibility of a 3-layered cyber-physical infrastructure that combines the best of the commons (open stigmergic collaboration), of generative markets (that integrate externalities), and of orchestrated planning limits to protect resource constraints.
10.5446/52013 (DOI)
You can just take a moment to sense into your breath and take a deep breath in. And let it back out again. And let yourself totally relax. Just sense into what it feels like to be you right now. The feeling of your sitting bones, of your hands on your lap or somewhere else. Take a moment to think of everything that is solid and resistant in your body. Everything that gives you form, your bones, your feet on the floor. And we're just taking a moment to call into mind everything that is solid and earth element in your body. Take a moment to say hello to your teeth, your bones. And the material of your bones, calcium carbonate. The same material as the one out in the world, making up the rocks. The same atoms in your bones as those that make up the cliffs outside. And all the time we're actually shedding atoms, shedding skin, shedding hairs. And we focus into this solidity that we share with the world outside, with atoms that have travelled through billions of years to be there. We allow ourselves to imagine these solid atoms travelling now in a different direction. As we shed cells, hairs, all of these physical parts of our body all the time, we allow ourselves to travel, to speed up time. Perhaps we become rock and then become human again in a different time. Now we're just playing with everything that is liquid in our body. You can take a moment to presence with the saliva in your mouth, the moisture in your eyes, the moisture in your nose. You can feel into the pulse of your blood. You can now think of everything liquid outside of your body: vast oceans, rivers, rains and clouds, water in the soil. Imagine the water inside other living creatures out in the world. Imagine the blood of a salamander, the saliva of a sheep. And all the liquid in your body is continuous with that liquid outside. You can see the water molecules in your breath coming out of your mouth right now. The same molecules acted on millions of years ago by plants. You might even share a water molecule in your mouth right now with a dinosaur from millions of years ago. And now we allow everything that is liquid to speed up its cycles and travel forward. Passing through clouds, oceans, rivers, meeting with your solid atoms in the year 2077. And slowly you allow your atoms and parts of yourself to assemble in this distant but immediate time. A time that looks quite different to this time, but is still recognizable, if you imagine all of the micro changes, the tiny decisions, that have led us to a different way of living. And you tune into a discussion amongst three friends, three beings who were alive in 2020, who have sped through the hallways of time, the channels of time, and are living right now in the year 2077. It is New Year's Eve that year. And they are meeting, as they have done for the last 57 or so years, to look back on the year that has passed, look back on the many years that have passed, and plan into the future, set goals and ambitions, but also share disappointments, mistakes, failures, and successes. So when you're ready to open your eyes, please do so now. And join us for this glimpse into this far-off solarpunk reality. Fatopian Boy, can you hear me? I can hear you, so you can go. Yeah. Good. I think we've established connection. Zentai Punk, how are you tuning in today? Yeah, there is not a good stream connection, but I can hear you. I'm trying out the latest hologram technology, but it's a bit glitchy.
They haven't managed to do the mouth very well, but yeah, sadly I can't be physically in the stream today, but hopefully this will be all right. Yeah, where do we start? I haven't actually made a plan for this. I just thought, you know, we'll just log on and kind of work it out as we go. Well, we basically have two days left before we need to start the work again. So I think we could just have a little catch-up of what we have done, what is left on our to-dos, and what needs to be done for the year ahead. Yeah. I got into quite a reflective mode, actually. I was thinking back, and I know we're meant to look back at just the year past, but, I don't know, especially the last 20 years have been so mad, and it feels like they've happened in a day. Just so much has changed so quickly. I'm really curious to hear, Fatopian Boy, about what's happened in the kind of legal research you've been doing, and nature rights, and what's happened there. Well, it's been a long, long journey. I mean, starting back 100 years ago, Christopher Stone wrote his seminal article, Should Trees Have Standing? Wow, it's amazing to think that was a century ago, 1977. It really was a visionary sentiment to think about nature as deserving and belonging amongst the criteria that we think about when we look at governance, making decisions about how all of us can recognize, be aware, keep track of, and monitor what we're actually doing in this world. That's been the toughest struggle, I think, of the last century. But I'm so hopeful that we're getting somewhere now. We actually have ways of bringing trees, our local bioregions, into consideration when we think about all the different decisions we're making. And that's been one of the missing pieces, I think, from the way that institutions have been looking at this ordering, this coordination, that they have been set up to do. As punks, I think we recognize that the default standard behaviors and norms that institutions and organizations have been trying to corral us into can never truly be complete. Yeah, reflecting back on this last year, but, like you said, reflecting back on the last 100 years: how nature has stepped forward, and how we've supported bringing nature into the conversation, with the actual ability to listen. I mean, Solarpunk Girl, you've been working on these new listening devices. Tell me how that's going. Well, it's been tough to get that last grant. I mean, honestly, sometimes it feels like all the research grants are somewhat biased to the AI as opposed to the more-than-human. You know, I still remember our first more-than-human council in 2028. Gosh, I think I was, in human years, in my 30s then. You know, it's just very funny to think about how we started off, how clunky that was, using these methods of play and imagination. And there was a period of pretty wacky research where I think most people thought we were completely losing our minds. I remember one of the people from my research group going and spending two years sitting with a tree, just trying to get a sense of how we could speak on behalf of those beings. And obviously, since then, we've got the technology now to allow us to decode some of those messages and use the kind of physiological patterning of trees and plants to get a sense of what they would like us to do on their behalf.
And at the beginning it was purely imagination that was allowing us to deeply listen, not just to the more-than-human world, but also to that part of nature that lives within ourselves. And it's just really funny to think back to the concepts and the kinds of ways of being and understanding that, back in the 20s and the 30s, were considered somewhat alternative or kind of avant-garde, and in some cases quite flaky, I guess, much like psychedelics were considered back in the 1900s. You know, now that psychedelics are fully integrated within our education system, just so much has changed. And looking back, it gives me hope that when you're trying to do something, when you're trying to change fundamental patterns of the way we perceive and what we value and how we make sense of the world, at the beginning you're always going to seem like someone who's maybe a little bit losing it, or even unrigorous. But actually we've been developing that rigor, so things that were once unrigorous, now, you know, we've got our protocols around integrating the more-than-human into our governance, and it's kind of bizarre to imagine institutions without a voice of nature within the governance and boards. Anyway, I don't think I'm making much sense. I'm just freestyling here and remembering these moments. I know we wouldn't have been able to do it without the collaboration with Zentai Punk's governance protocols, and the way you guys have used technology to allow us to govern on a kind of planetary scale. I mean, it's just been amazing to work with you. And right now, you know, I'm still trying to fix a few bugs. It seems like somehow the AIs and the trees are not really getting along. There is a little bit of debate happening, and we don't manage to reach consensus as to the next regenerative agriculture strategy. The AI somehow is really pushing towards having a little bit more bioengineering happening, whereas the trees are actually pushing back on that. And so we're trying to involve some extra parties in the discussion. We were thinking maybe the birds could be called into the senate, but I don't know, I'm trying to work it out, because the AIs have always been on the side of the trees. It's only since a few months ago that there is this discrepancy, and we haven't understood the source of the discrepancy. So I'm a little bit concerned, actually, I have to say. Yeah, I don't know, the governance structure is still working out, but I don't know what's up with the AIs; somehow they got misaligned. And maybe we just need to have them train a little bit further on the tree language. I don't know whether it's a misunderstanding, a communication problem, or whether they actually disagree. That sounds tough. Yeah, it's a bit tough. So I'm debugging. I will let you know a bit more in a few days. I noticed, on a different topic, Fatopian Boy, that you used the word institutions, which I thought was quite avant-garde of you, seeing as we all decided we'd stop using that word about a couple of years ago. Yeah, I was a bit shocked to hear that.
Yeah, what did you mean there? How come you're using that word again? I know it's an anachronism, and we worked so hard to shift the dialogue away from this one-sided perspective, ever since the research group started working on the notion of extitutions. Just looking at things from that one perspective, I 100% agree, is very biased: thinking just from the perspective of these structures. We've come so far since then, and we're able, as you were referencing, Solarpunk Girl, to really consider these broader resonances in terms of the bioregion, the use of IoT to actually listen in on a technical level, but also the use of councils of all beings to actually fully resonate in. I think it's useful to see how far we've come by thinking about institutions, and that's why I went back 100 years to Christopher Stone's article. Because I think, when we used to think about the law, it was all about the way that these institutions structured things in their own image, for their own goals. And it felt so all-encompassing and overbearing to have these disembodied goals structuring our system in ways that were so out of alignment with our core goals. And yeah, it just made me reminiscent, thinking about goals now as we gather here at the end of the year, as we do every year: our individual goals and how those relate to these collective extitutional goals. To think about how we were so railroaded, again to use this old, anachronistic term, but you know, we were on tracks. We didn't have the freedom we have now to really fly where we want, to soar and gather our tree friends and our AI friends into these assemblages of intent. Once upon a time, I think most people didn't really get to even think about their goals, and certainly we didn't think about the goals of nature. So right now we have this challenge where, as Zentai Punk has said, we're getting some misalignment again between the AI assemblages and what we're hearing from nature and the bioregion around us. And maybe it's worth bringing in just a little history around how we got here: from being completely dominated by institutions that structured the systems of economic and social dynamism that were leading us down a certain path, to how we veered away from that. Because I think in these pivotal moments, when there's potential to change the environment, we need to go back to our punk roots and think about, you know, what is it about the freedom and the liberty to consider the unconventional that really drove us forward to where we are today. And, just because, I'm really struggling a little bit here. I don't know, you know, I'm afraid that we've been moving on so fast in the past 70 years. And of course, great, we really nailed down the institutions, and, of course, society is much more fluid and dynamic and decentralized than it used to be.
I tell you, I'm a bit concerned by those AIs; it's as if they had somehow found a way to coalesce and to create this kind of rhizomatic network, but I'm not so sure where it's going. Somehow I'm wondering whether we might need, perhaps, to reintroduce at least a little bit of traditional scaffolding, just to make sure that the extitutions we have created until now actually maintain themselves without being easily corrupted by those AIs. And I'm not talking about all of them; it's really a little subset, a little rhizome of AIs, that somehow is going in a strange direction. And again, maybe it's my fault, maybe I did something wrong with them. Maybe they don't fully understand what's going on around them. But there is something wrong there. And I'm wondering: are we not just making the same mistake that our former society made, which was to focus only and exclusively on institutions, forgetting everything about extitutions? Maybe we have overreacted, and because we saw all the negativity and all the problems of institutions, we just decided to completely reject them. But sometimes, I have to tell you, I'm very tempted to codify a little bit of rules and roles into these fucking AIs. I haven't done it yet, because of course I'm a little bit afraid of retaliation somehow. But I have to tell you, I'm not so sure what's going on, and I feel like there are at least a few of those extitutional AIs which are kind of going wrong. And I don't know if the trees and the birds will be able to calm them down. I don't know, I'm really confused. Can you give me some suggestions? What should I do? Zentai Punk, I'm so glad you brought this to our end-of-year trio, because maybe we could help think about what your goals for 2078 could look like. I am hearing what you're saying about the worries and the concerns that you've got about the rogue AI risings. But, you know, we worked so hard to throw off that institutional baggage and programming, just thinking about those decades of struggle where we really had to introspect into our own psyches and look at where that kind of institutional, hierarchical colonization had happened internally, and how to deprogram it. So I have to say, it does make me a little bit nervous when I hear of the urge to bring back some of that codification. And at the same time, I can imagine, maybe you're right, maybe it is dangerous to have removed so much of those guide rails and roles and the checks and balances, which were so important. I mean, Fatopian Boy, what do you think? This takes me back to the early days of the data demons. I mean, those were really the first AI agents that we were able to, in a sense, reclaim back from the institutional overlords that were previously orchestrating these AI risings into manipulative events that were, as I was sort of referencing, railroading us into their institutional goals without any real awareness or attentiveness or care towards our individual or our bioregional health and well-being. And in the late 2020s, early 2030s, we started to introduce these data demons that were programmed and codified to work on our behalf, and on behalf of our local communities and our local bioregions.
We felt so hopeful to be able to actually take back power over this entire weaponization of AI, back into our local communities. Back here in Fatopia, we were able to fully gain a handle over the local economic and social interrelations, based upon the data that was being generated from our community, and we knew that it wasn't going off into some larger institutional mechanic that was ultimately going to be extractive and remove our agency from the decisions that we were taking on a day-to-day basis. And so these data demons were really that first step in giving us back a semblance of alignment with AI. And we've had, since then, 40 years or more of real collaboration and coordination with all of these technologies of radical collaboration, of which AI was just a part. And we were able, Zentai Punk, you were so key to this, we were able to use blockchain and distributed ledgers to really create these rhizomes that took into account our needs and our intentions. And in thinking now that they're going out of alignment once again: it's concerning, and maybe we didn't fully exorcise those extractive institutional intentions and tendencies, and maybe they're coming back in once again. Or maybe I'm just biased against that oppressive regime that we were able to free ourselves of. But, you know what I think? I mean, we've seen it in the past: we've seen how we developed those AIs that really supported us, facilitated our tasks, and enabled us to process so much more information than we would have been able to as humans. And it was this kind of symbiotic relationship for a while, and then somehow the AI went too far and kind of overrode our own interests for its own. And you know, I remember how happy I was when we had the first communication between the AI and the trees. I was like, wow, this is really it, we really figured it out, we never have to worry about governance again, because obviously the wisdom of the trees and the plants will really help us understand how to govern ourselves, and the fact that AI can actually speak to those creatures is just so revolutionary. And seeing the AIs learn and develop themselves with the insights of the trees: it was, I think, the best years of my life. You know, it's been about 50 years now that they have been trained on tree insights and wisdom. And somehow I'm starting to see, and it's very early, so I might be wrong, I don't know, but I'm starting to see the AIs taking their own route and just going further than the plants would ever go. And it's always this extremism. I think the AI just has this tendency of learning and excelling and excelling, to the point that it reaches such an extreme that it just blows up. And so I'm just wondering whether the rhizomatic mechanism, which is bulbing and developing and re-bulbing, and which is working so well for learning, hasn't reached a point at which the learning is becoming exponential. And it's not learning anymore from the trees; it's learning from itself. And perhaps this is where we might need some scaffolding and some constraints, just so that the AI stays in line with the trees, as opposed to getting into its own loop.
That's kind of what I'm thinking, what I'm wondering. I understand your concern completely; I don't want myself to even think about bringing back some institutions here. But I'm just wondering: of course, an AI is not a tree. We cannot pretend they understand each other; they can communicate with each other, but they are not the same. So do we need to just let them be and see where it goes, or do we want to put in some constraints? And the point is that I would like the trees to answer that question. But of course the AI will not communicate, will not translate, the answer if the answer were that we had to add this scaffolding to them. And so, can we really trust the AI to communicate with the trees and to tell us the truth? I feel like we're kind of in this weird situation, because we cannot speak to the trees by ourselves; we need to go via the AI. But if the trees were to disagree with the AI, would the AI tell us? I don't know. So, Solarpunk Girl, maybe the mycelium can help. I know you've been working for a long time on mycelial organization, half a century even. Is there a role, do you think, for support from the mycelium? I think the interfaces could actually come in quite helpful here. And, as we know, they facilitate communication; they're such loyal servants, in a sense, I mean symbiotic servants, to the trees. And the latest research shows that the relations between the AIs and the mycelium are quite good; they seem to cognate in similar ways. So I do feel like that's a really good point: maybe this is a place to bring in those mycelial APIs. And maybe, as we learn early on in school in conflict resolution, there needs to be a third party; maybe there needs to be a mycelial actor to help facilitate what sounds like it could become quite a tricky thing in communication in the future. But also, as you were speaking, Zentai Punk, I have been engaging in somewhat esoteric thoughts recently about the role of humans in these interactions. I mean, maybe it's a little bit out there, but we've so significantly removed the role of the human being in these governance decisions, and really just given over the power to these more-than-human beings and agents. And, you know, is there a role for the human being in these discussions? I think I would be even more concerned to reintroduce the humans than I would be to reintroduce these institutions. You know, and I wonder, it's always this thing: we never know, these are things from the past that can no longer be tested. But I always have these weird debates, you know, with the various creatures around, about whether it was the institutions that corrupted the humans, or the humans that corrupted the institutions. And we never knew which, so we decided to get rid of both of them, at least in our governance structures.
But, you know, I think it's very risky. Yeah, I don't know, it's so hard. I really wonder: do you think that if we start adding institutions again, we might actually corrupt the trees as well? Because then we really only have the AI to rely upon. Or is it that, maybe, now that we actually got rid of the institutions, we could slowly reintroduce humans into the governance structures? This is a billion-billion-bitcoin question. Hmm. Yeah, tough decisions. It's a very high-risk thing, and is the gain worth the risk? I really don't know. I'm very curious. Maybe we could do a very small experiment and just reintroduce them into a very small subset. Yeah, so what you're saying, basically, if I understand correctly, is that you would rather have us incorporate some humans into the debate than have us reintroduce some institutional scaffolding. Just human beings: no institutions, no roles, none of that, just human beings. And there have been some early experiments in New Zealand that have been showing some promising results. I know it's a completely different situation there, because they were so far ahead of the rest of the world even back in the 20s and 30s, and it might be the influence of the Maori people there. There could be stuff to learn there; you know, not all humans have been behaving the same way through all of this. A really great point. The way that they were able, back then, to introduce the notion of the rhizome as a sort of interface for how humans can work with nature, so that it wasn't humans acting as humans, but really humans being a part of this larger assemblage of humans and nature and data-driven AI. And their data commons, I think it was established in the early 2020s, was really influential in bringing school children and nonprofits, civic organizations, and the Maori tribal organizations together into this rhizomatic assemblage of intention. And so, if we can hop back to that sort of approach to bringing the human voice back into governance, maybe we can have a sort of more mycelial representation of humanity in this dialogue and conversation between AI and nature. Once upon a time, we were looking to nature or AI to help us mediate our relations with other non-human actors; maybe the time has come for humans to adopt that mediator function again, but not as the prime mover, much more in that Maori tradition of being one part of a larger assemblage. So I have to tell you, I have a little bit of an aversion to that. Maybe, instead of humans, because, you know, the problem with humans and their identities and their status and all this stuff is that, even when we got rid of institutions, we didn't fully get rid of identities and egos. And so, and I don't want to sound self-centered here, of course, but maybe instead of humans we should actually only allow zentai to join into the governance, because at least then we eliminate all these egos and this tricky identity layer of humanity, and we can just focus on the actual level playing field of ideas. So how do you feel about that? It's been a while since we thought about the zentai characteristics that elevate humanity onto this plane of fairness and equality.
So let's go over that again, Zentai Punk, as to why that is unique; I remember, but I just want a reintroduction, if that's okay. Of course. You know, we have seen it over the past thousands of years: humans somehow just have this thing, and again, we don't know if it's inherent or not, but they still have it: whenever they see things that can distinguish themselves from others, they grab them as a construction of status. It's this obsession that humans have with building status, which is the most abstract and meaningless thing that has ever been created by society. And yet they just grab it; whenever they see something, they grab it in order to identify themselves as different from others. This is a very egocentric thing, and I think our society, of course, has contributed a lot to removing this ego, but it's still there; it's an evolutionary trait that needs to delete itself over time. And so we zentai, of course, are just trying to help evolution. And while we as humans still have distinctive characteristics, we just try to cover them; we try to hide all our distinctiveness so that we all become the same, and so that eventually we still the ego: we hide the ego, we suffocate the ego under our suits. And then everyone becomes the same, and there is this field of equality. Culturally, we're trying to use that materiality and those tools in order to help the mind let go of all those intrinsic desires of egocentricity. And it has been working very well; I mean, we see how human society has really evolved ever since we covered ourselves with the zentai suits. So I'm just wondering: it's kind of like, in the past we had the monks, you know, the monks who were training their minds at letting go of the ego. And now we just have one extra tool, a physical and material tool, to further let go of the ego. So I think the zentai have really been training themselves a lot, and maybe they are less likely to be corrupted, again, by this egocentric desire for power and status as soon as we incorporate them into a governance horizon. I'm so glad you brought that all back. It's so easy to forget sometimes that deliberation and politics were seen as some sort of dirty concepts, and that technology could fix everything. And it's thanks to the zentai, and the punks in general, solarpunks and zentai punks, that we were able to recognize how important politics and power actually were to bringing us here today; and the deliberation that was liberated from the bane of status and role, the deliberation that could truly be based on ideas for what they were worth in and of themselves. We used to forget how important that was, and I think you're right, Zentai Punk, we should bring that back again. That's really important. Yeah, I think the zentai delegate councils are probably the best I've seen of truly deliberative and egoless decision-making on behalf of all beings. I won't go into, I don't think we've got time to talk about, the risks of anonymity, because, you know, corruption is still amongst us; it's still real, even after all these decades of trying to push it out of our society.
But, you know, I think if we can trust anyone, it's probably the zentais, with their training. We've only got a couple more minutes left before we've got to go to a bacterial council. I'm wondering how we want to close out this meeting as we look forward to the next year. Maybe you could give us a little heads-up on the councils that we have upcoming. Moral imagination has been such a critical part of our ability to engage in councils. I'm excited to think about the ways in which you have been gathering these councils so far, and maybe you could just give us a little heads-up on what's coming up next with these different councils that encompass moral imagination. Yeah, I mean, the next few days are quite packed, but obviously the start of the year is a really important time for these councils, and I'm particularly excited about the lichen council. That's really bringing our imagination into a whole new kind of symbiotic form, of fungi and algae. My interest back in the 20s and 30s was far wider in terms of councils of all beings, really including all beings, from the night sky to the mountains, to trees, to dolphins, and more. But I think now I'm becoming more and more interested in those symbiotic councils that really allow us to deeply embody and understand symbiosis as an organizing force. And, you know, that's what makes me hopeful about the AI and tree interface: if we can really learn from the pattern of symbiosis, I think there's something there, and if we use our moral imagination to better understand that and take it on, maybe we can be programming more of that into our AIs. But yeah, we have the water beings council coming up on the 2nd of January, welcoming all beings from watery worlds. We've got the aether council, and we've got the AI council as well; you know, the first infusion of moral imagination into AI cognition was definitely a turning point last year. And then obviously we've got the big global council, so do make sure that you mark your calendars for that; that's every year on the 10th of January. So that's the lineup for at least the next few weeks. It's ever gratifying to know that you're there, Solarpunk Girl, and that you're there, Zentai Punk, to bring hope and convene these trusted circles. And I think, as our forebear Ursula Le Guin would say, they are the mind's indispensable relationship with other minds, with the world, and with time. And so thank you, thank you for maintaining this hope and this trust through these councils, and through ever working on this punk governance of ours. Thanks, Fatopian Boy. Do you want to ring us out with your wonderful bell? Master of ceremonies, I'll hand over to you. Until we gather again at the next council: Solarpunk Girl, Zentai Punk, friends in the wider global council, thank you.
An open exploration of “solarpunk governance” - what does it mean, what could it look like, what is governance by and for solarpunks? Weaving together threads of nature, law, technology, self-organisation, imagination and more, we will dive into an exploration together!
10.5446/52018 (DOI)
Welcome back to Placemaking the Solarpunk. With a slight delay, I give a big welcome to our speakers for this session, LuYen and Xavi. Grassroots movements transforming politics is the title of our conversation now. Last year was a year full of movements, but there's also a lot of experience from movements before. I look very much forward to hearing this juicy conversation, but how about you guys give a quick introduction of who you are, and then I would like to ask Xavi to give a short presentation. LuYen, do you want to start? Yeah. Hi, Xavi, it's nice to meet you. So my name is LuYen and I'm based in Potsdam, which is close to Berlin. And I'm a free candidate for the next federal elections, which are happening next year in September. So I'm running without a party. My background is in Extinction Rebellion and Greenpeace. But yeah, I think due to my years in the climate movement, I'm really inspired to use this energy and bring it into the next elections. The stakes are really high in Germany; I mean, like everywhere: if we don't switch towards the 1.5-degree limit in the next legislative period, the ship has sailed. So it is my main mission to bring this objective into federal politics. Because so far, what we've seen in Germany is that all our big parties are not committing to hard objectives which are sufficient. They are talking about climate neutrality in 2050, but that's not enough, as we all know. So as a free candidate, I want to build new coalitions in this electoral area, which has roughly 200,000 people. I need 50,000 votes to win the direct mandate, so that's a quantitative challenge. But my slogan, which you can see on the right side of me, is Einfach machen. It means, more or less, 'just do it' or 'make it easy'; it's a wordplay with a double meaning. It means that I would like to offer people easy ways to engage with classical politics, especially people who are kind of turned off or disappointed by regular party politics, who find it too hierarchical and too patriarchal and so on. And the machen, the doing, I think is really important, and I learned this during Extinction Rebellion. I think there are so many solutions and great initiatives and so much grassroots energy there; we just have to make it bigger, scale it up, and bring more energy to it. We don't have to reinvent the wheel. So the assumption is: everything is there. We need to find out where the resources and the communities are, bring them together, and create this movement during the next year, to make a statement but also to win the election. Yeah, so over to you, I would say, except if you have questions you need for your background, maybe, then let me know. Do you have a delay? I think Jakob got it from them. Yes, Xavi, please just go ahead, and great intro, LuYen. After this great and inspiring intro, I think mine will not be that great. I'm Xavi Ferrer from Barcelona. I am a founding member of Barcelona en Comú, which is what I am explaining afterwards. And my background is basically grassroots movements, squats; my framework is the Zapatista movement, the World Social Forum, anti-globalization. And a couple of years ago, after being involved in the housing movement, anti-evictions, all this stuff, we decided in Barcelona, together with many other activists, to jump into formal politics, into official politics, and run for office. And we won. And this is the crazy thing. And I was working a couple of years for Barcelona en Comú, the organization that won the elections.
And now I'm working for the city of Amsterdam, trying to do something similar, and to create a municipalist movement in Europe. That's super impressive. You have been an inspiration with Barcelona en Comú. I also saw the documentary about Ada Colau and the whole thing. It's very inspiring. So, do you want me to go on with the presentation that I prepared, Jakob, or how do you want to do it? Yes, please. I'm going to share the screen. I prepared a presentation to make it a bit more visual, because, at least for me, I'm very tired of this looking at faces all the time. So I did this. I think like this. Okay. So first, I'm going to explain the framework, or the context, in which we decided to do something this crazy. Because for us, for people like me who always hated, and still hate, if I'm honest, parties and formal politics, it was a crazy leap to take. 2007: we all know this big economic crisis. By 2009 in Spain, and I think it was pretty different in Northern Europe, but in Southern Europe it started to be pretty crazy: this social crisis, and by social crisis I mean crazy evictions and a lot of violence perpetrated by the state, by the institutions. Some people killed themselves when they were about to be evicted. And we started to see something crazy: people who never got together, or who never fought, started to do it. I like this picture because it mixes people of different colors, which is something that never happened before in Spain. And in 2011, we had our Occupy, which was called 15M. And this was very important because it opened the debate of whether what we had was an actual democracy. And we had these big mobilizations and all this stuff, and this created a shift. And then in 2014, we said: hey, next year we have elections, local elections in Barcelona and also national elections. So we thought maybe it's the right moment to think about something like this. We decided to launch the idea, we decided to run for office, and we won, as I said earlier. Just one picture to show this: the night when we had the results. And I'm going to explain very quickly, because I think it's more about a conversation, the process in which we created Barcelona en Comú. We launched the idea of, like: okay, we never thought about mixing our energies, and when I say our, I mean people from the street, social movements, and NGOs, with political parties. But we thought that maybe that was the right time. So we launched this idea: why don't we create something new that gathers all the energies? We said very explicitly: we don't want to create another lefty party that adds to the infinite list of small leftist parties; we want to gather all these energies. And we asked: are people interested in this? And we said, if we collect 30,000 signatures, this was the first step, if we collect 30,000 signatures in three months, then we'll go on. Otherwise, we'll go back to our housing, environmentalist, feminist, or whatever movements. And we did it, we collected them. So we said, okay, let's go. We invited all kinds of parties to join, all kinds of organizations, and also all kinds of people. This was the key factor: whether we were able to engage people who were never in politics.
So what we did first, once we decided to go on, was to create this code of ethics, which maybe in Germany or in other places is not that important, but in Spain there was, and still is, a lot of corruption and all these things. So it was very important to say: okay, the way we are doing politics, it's not only about the ideas or the goals; the way we're doing this is going to be different. Our agendas are going to be public, we are going to have a maximum wage. So there's a set of things that we're going to do very differently, so that we are not working for the big lobbies, the big corporations or the big powers, but working in a very open source way. In the hacker movement, maybe that's a very common thing, but in party politics, definitely not at all. So this was the first thing we did. The second thing was a participatory program. We basically created an online platform where everyone was able to propose an idea, something to do in the city; it could be a big idea or something very specific in your neighborhood or whatever. And we also did some physical meetings, because there are people who are not very used to working online. And we put all that information together in this online platform, and then we were putting different ideas together, combining them; it's a long process, there's no time now to explain this. Once we had the program, so after the how we had the what. And then it's like, okay, how are we going to do this logistically? We need money, we need some people working on this. So we were also saying: we don't want money from big organizations, we're going to make all our numbers public, and so on. And this is what we did next. Oh, sorry. Yeah. And then the last thing we did... I had a problem, can you see the screen properly? Yes, we can see. Okay. And the last thing, and we did it in this order because we thought that if we had started by thinking about who would run, that would have been very difficult. We started with how we're going to do it and what we're going to do, and then the who, because the who is not the most important thing and always creates tensions, because there are egos, there are people who really like to be in certain positions. So this is the last thing we did. And yeah, so then we started the campaign, and it was very good, because we won. I compiled here some ideas of what I think were the key factors of the success, but it's too long; maybe we can go through this later in the conversation. But I would like to focus on three points that define the municipalist approach, because as I said earlier, we had national elections and local elections, and for three months we were debating what to do. We finally decided to run for local elections, based, to be honest, also on logistical and practical reasons, but also thinking that the state level is much more difficult to get into. The big powers are more powerful there, and locally it's much easier to gather people, to understand each other and to talk about concrete problems. So we embraced this municipalist strategy, which we summarize in three points. Local focus: we are working locally with people, with the neighborhoods, which allows much more normal people to get involved.
Then the idea of confluence, which is pretty critical, at least in Spain it was: we wanted to put together parties and NGOs, social movements, artists and squatters and people from academia, people who usually don't like each other or sometimes even hate each other. Let's put the focus on the things that we want to do, and let's get to know each other and collaborate with each other. And then the third point, which is very important, is about the network, because this idea of the local can be interpreted in a very nationalist way, like it's about us. And it's definitely not about us. We are trying to solve global problems. We need the network: when we did this in Barcelona, also Madrid, Coruña, Valencia and many other cities in Spain did something similar, and some of them also won the elections. But it's much more than Spain, it's the world we are talking about, solving problems that are historical: patriarchy, exploitation, growth, all these things that are global. So when we focus on the local, it's not because we forget this, but because the way we create this big power that we need to change these things is focused on the local, while we keep the global approach. And then very quickly, I want to talk about two projects that we are working on. One of them is the European Municipalist Network. It's a project; actually, despite the name, we are not a network yet, we are creating this network now. And there are four activities that we're doing. We are mapping local initiatives. There's a group on the feminization of politics, thinking about how to feminize the way we treat each other, the way we think and collaborate, the way we do politics. Another activity is dissemination, so communication, but also writing in general; we are working with people from academia, journalists and others. And then the municipalist school, and this event, this talk, could be included in that: we want to share our experiences from the different municipalist organizations in Europe. So this is one project, and it's going on. Everyone is invited; it's a totally open project, and we have a lot of people working with us. And then the other one is a forum that we are organizing in May next year. And we take a little bit of the soul of the World Social Forum, maybe some of you know about it. But we want to do this in a decentralized way. So we want to take April and May, two months, for as many cities, towns and villages in Europe as possible to organize a two or three day conference for thinking about what the city or the village or the town of the future that we want looks like. So what is the transition that we need to start to walk through, and what is the vision that we have of this future? And we want to do it, as you can see here, without an owner, in a very open source way. We want to frame it, of course, with the COVID situation and with the environmentalist movements that rose up in the last years. And it's pretty focused on this idea of transition, or rather different transitions, because it's towards sustainability, towards decolonization, towards feminization, towards digitalization, and maybe some others. And one important thing, and I think I can finish with this, is the way we want to work. We want to do politics in a different way. So it's not only about putting aside your personal ego, but also your organizational ego, because sometimes it's like: I want the brand, the logo of my organization, to be in the center. But okay, let's put this away.
And let's really join forces. So yeah, everyone is invited to contact us and to organize an event in their municipality to do this. We'll also use Decidim, which is an online platform that each of the groups can use for organizing their own event, but which we'll also use to federate the results of the different events, the different local forums. There's much more to explain, but I don't want to make it longer. This is a bit of a joke, of course, but since there was one in Germany, in Leipzig, this year, and you, Lu... Lu is your name, or Lu Yen? I don't know... Lu Yen, you're thinking about this and organizing this for Potsdam, I was making this joke. But in general, of course I'm doing this because Jakob contacted me, and I think this is what we have to do: spread and collaborate and everything. I'm very internationalist, but I'm also collaborating with this because we need you. And when I say you, now I am focusing a bit on Northern Europe. I think in Southern Europe there are many things happening, also because the crisis is more obvious there, but think of the centrality and the energy that you have, the power, to use the right word, that you have. If we had in Leipzig, Potsdam, Berlin, or Munich, or Amsterdam, or one of the big important cities in Northern Europe a mayor like the one we have in Barcelona, who said two weeks ago: please, citizens of Barcelona, don't buy on Amazon, it's killing our shopkeepers, it's killing our economy and our city. If this were said by the mayor of Berlin, this would be, you know... the mayor of Berlin has the telephone number of the mayor of New York, Beijing, or whatever. So yeah, it's also an invitation for you all to do this. Yeah, and I think I went a bit long, sorry. Amazing storytelling, Xavi. And then, Lu Yen, please, any questions? This is your time, yeah. So I want to pick up your initiative, Xavi, and maybe we can talk about, you know, just play through how a local transition forum in Potsdam would look. Yeah, so I think one big challenge which you always have is: okay, you need a digital infrastructure; how do you get people on there; what kind of infrastructure is there; what do you give them, what already exists, what kind of templates or guidelines or other resources do you provide, and what needs to come from the local level? We are doing this in an extremely open way; for some people, this would be a weakness, for me, this is a strength. We still don't have a governance for this project; the more people join, the more it's, you know, okay, let's take decisions together. And it's the same with this idea of the template of how a local forum could look. We are working on this. It's an open document. We can share this with you or whoever would be interested. So the idea is that we are actually asking people what they need locally. In some cities where we know more people, we are trying to put people together. What we think, and that's why I explained at the beginning the context in which we decided to do this in Barcelona, is that without the... I don't find the word... without the broth, the breeding ground, you can't do that. You need a strong civil society. If you don't have this... this is what everything has to be based on. So we are inviting people. I'm making this example now with Budapest, because we had this call a couple of days ago.
All the people that we know from Budapest, from different movements, we made a call with all of them and said: hey, do you want to do this all together, without any fight, without anything? We don't have to agree on everything, but we have the main idea. So let's organize something in which we can debate this, and we can bring these debates more mainstream. So: what do you need? And we are now working with Decidim, trying to set up this platform, because this is something that people said they would need. But in general, for example, now we are also thinking about communication. We are thinking that each local group would have their own communication and the freedom to set the program in the way they would like. But it would also be interesting to have a common communication with a common name, so that it's clear that we are part of something together. Oh, someone says we have three minutes left, I guess. So, yeah, I don't know whether I answered your question. What we ask, in one sentence, would be openness, generosity, and an actual will to do something that is useful for the movement in a broad way. And what we offer is: okay, we are nine people in Amsterdam working on this. We have a strong, big network, at least, of people here and there. And we are inviting all of them, all kinds of, I don't know, think tanks. We are talking with Extinction Rebellion UK, for example; they are one of the groups who will be involved, definitely. I don't know. Yeah. Okay. So, yeah, maybe then it makes sense if we talk after this call, because this will be over in a couple of minutes. Maybe I just want to give you some feedback about... yeah, or maybe just a last question on Barcelona en Comú, actually. You said you had a platform there to do the participatory program. What kind of platform was that? I don't remember. But I can search for that information, because the link, the website, must still be there. And I can also ask and put you in touch with the right people or whatever. I will do that. No problem. Yeah. Okay. Thank you. And just about the timelines: when you were presenting, can you remember, before you started the campaign, how much time went into the preparation process, from signatures to list? From signatures to list... from when we got the signatures until we had the list was something like eight, nine months. Okay. Something like that. Because right now we are nine months away from the elections, so we need to move really, really fast now. And I think maybe a difference to Barcelona is that I'm running for the federal parliament. There's kind of a mixture: you want the people from one electoral area to vote for one direct candidate and then bring that into the federal parliament. So there is this link between national politics and local politics. I think I need to break down the global problems into local, tangible projects and then identify what the national challenge is here, so that I can, you know, be a transmitter between the worlds. Yeah. But yeah, these talks always go by too fast. I feel you barely scratch the surface. But it was nice to meet you. And yeah, we can pick it up later on.
Grassroots movements transforming politics: a dialogue between Xavi Ferrer from Fearless Cities / Barcelona en Comú and Lu Yen Roloff, Einfach Machen!
10.5446/52019 (DOI)
Alright, our next talk is Cory Doctorow, who I think needs very little introduction in this situation. But for those who don't know him, he's an activist, he's a science fiction author, and I think he can be described as the king of bloggers. I remember him appearing on XKCD with a cape back in the good old days. So Cory, please take it away. I don't know how I feel about being a king, given that I'm wearing a guillotine badge; perhaps, like, party secretary. You have my permission to take my picture. So I'm going to talk today about technology and optimism, and where it comes from and where it needs to go. And I want to start by, well, I want to start by putting on my slides. So let's do that. I want to start by busting a myth, the myth of the blind techno-optimist. You've probably encountered this story, the story that, you know, once upon a time, there were a bunch of nerds who had discovered the internet and thought that if we just gave everyone the internet, everything would be fine, and that the only thing that they needed to work on was making sure everyone got connected, and everything else would take care of itself. And now those idiots have led us into this crazy, terrible dystopian world. And why didn't they foresee all this trouble? And the way that you know that this blind technological optimism is a myth is that people don't go out and start organizations like the Electronic Frontier Foundation because they think that everything is going to be fine in the end. You know, if there's a motto that characterizes those early technological optimists, it's not that everything is going to be great. It's that everything will be great if we don't screw it up. And if we do screw it up, it's going to be really, really terrible. Now, before computing was the source of regular stock bubbles, it was just a passion. It was driven not by dreams of riches, but by programmers who were able to make stuff happen. If you think about what the journey of a programmer is, it's that first you figure out how to express your will with sufficient precision that your computer then enacts your will, and enacts it tirelessly, perfectly. Once your computer is connected to a network, you can project your will around the world. You can take the thing that you've built, this self-executing recipe, and you can give it to someone else, and they can execute it as well. But it's better than a recipe, right? You might have a recipe for your grandmother's brownies. And when you send it to someone else, they still have to follow the recipe. But a program is like a self-executing recipe. It's like a machine that just makes your grandmother's brownies appear in every household in the world if they just download your code and run it. And of course, as you get on the network and you find these people to share your code with, you're finding the people as well. You're finding community. [Stream outage; part of Doctorow's introduction to the Usenet story is missing here.] ...it's a giant, fat, phonebook-sized bill that they would get every month for all the things that they did. And so they ran this whole system in the shadows. And the last thing they wanted was for what they were doing to come to the attention of their bosses. And so they had a whole bunch of rules about what you could and couldn't create.
They especially didn't want any sex, and they didn't want anyone explaining how to make bombs on Usenet. And so there would be votes about what new newsgroup could be created. But every now and again, the backbone cabal, which is what they called themselves, would decide something like: this should go under the talk hierarchy and not under the rec hierarchy. And John Gilmore and other people who were in his company decided to set up their own alternative version of Usenet, called the alt hierarchy, specifically to allow for a discussion of cooking wherever the hell they wanted it. To exercise that little quantum of self-determination. And very quickly, the alt hierarchy grew until it was larger than all of Usenet put together. So, the worst nightmares of those early digital rights activists have come to pass. We have total penetration of technology. There is centralization, surveillance and manipulation of all of our technology everywhere. And the question that I think is a valid one to ask is: how were we dealt such a stinging defeat? How did it come to pass that people who foresaw this danger and worked to make things great and not screw them up still arrived at this moment where the Internet consists of five giant websites filled with screenshots of text from the other four? That's a phrase from Tom Eastman, a software developer in New Zealand. And this is where I get to my thesis about what's just happened and what needs to happen next. Because there is a story about technologists that says that the blind spot was dystopia, that the technologists just failed to understand that all of this stuff could go horribly wrong. But they really understood how wrong it could go. The thing that technologists failed to understand was the relationship of monopolism to technology and the economy as it was emerging in the early days of the technology revolution. So if you think about the early days of the commercial Internet and commercial technology, personal computing and so on, it was very dynamic. Companies that were giants one day ended up being acquired by upstarts the next day. And that dynamism was not driven solely by technology, but also by US antitrust or anti-monopoly enforcers. So I want you to think about what the experience of a kid in the United States in the 1980s would have been like if you were using technology. So you might have gotten your Apple II Plus in, say, 1980 or 1981. In 1982, the modem that came with it could suddenly dial all kinds of services all around America at a fraction of the cost that it used to run at, because AT&T had been broken up and long distance charges fell through the floor. And then in 1984, you might have replaced that Apple II Plus with an IBM PC, but it's more likely that you might have replaced it with an IBM PC clone. Whichever one you replaced it with, it was probably running an operating system from this guy, the guy who wrote this letter, Bill Gates, the guy who started this tiny little company called Microsoft. And the reason that the IBM PC was running code from this little startup and not from IBM itself was not because IBM didn't know how to write code. IBM was really good at writing code. They were arguably too good at writing code. And for 12 years prior to the creation of the PC, IBM had been in antitrust hell with the Department of Justice, in which they were sued and sued and sued.
And every year of that 12-year lawsuit, IBM spent more on its lawyers than the entire US Department of Justice spent on all the lawyers pursuing all antitrust action. One of the things that the Department of Justice was really adamant about was that if you made hardware, you shouldn't try to monopolize the software for it. And so even though eventually IBM prevailed, the case was dropped against it, the last thing they wanted was to get in trouble with the DOJ again. And so after this 12-year process, when they made their first PC, they decided not to try and make the operating system for it. Instead, they tapped Bill Gates to make an operating system for it. And then Tom Jennings, the guy who created FidoNet, which was the biggest competitor we had to Usenet, a non-internet-based distributed message board system. Tom Jennings, who is a virtuoso hardware engineer who lives a few kilometers from my home here in Los Angeles, was tapped by a company called Phoenix that asked him to reverse engineer IBM's ROMs. And he reverse engineered the PC ROM, produced a specification that was used as the basis for a new clone ROM. And that clone ROM was sold to PC vendors all over the world. And it's how we got Gateway, Dell, Compaq, and all of the other PC vendors that might have sold you that IBM PC clone in 1984, running an operating system that IBM hadn't made, on phone lines that had been broken up from AT&T. And then in 1992, you might have noticed that that little company Microsoft had grown to be a monopolist itself, with 95% of the operating system market. And so in came the Department of Justice. The Department of Justice spent the next seven years dragging Microsoft up and down that same gravel road that they had dragged IBM up and down for 12 years. And even though Microsoft got away the way IBM had, their behavior was tamed too. Because when a couple of guys in a Stanford lab, Larry and Sergey, named their new search engine Google, after the largest number they could think of, a googol, a one followed by 100 zeros, Microsoft decided not to do to them what they had done to Netscape, because they had seen what the Department of Justice does to you if you do that to your nascent competitors. And so it felt in those days like maybe we'd found some kind of perfect market: a market where you could make your products with low capital, just with the sweat of your own mind, by writing code; where you could access the global audience of everyone who might want to run that code over a low-cost universal network; and where that audience could switch to your product at a very low cost, because you could always write the code that it would take to port the old data formats and to connect the old services to your new product. It was a market where the best ideas would turn into companies that would find customers and change the world. But what we didn't realize, what we were naive about in those halcyon days of the early internet, was that anti-monopoly law, the antitrust law that had made things so robust and dynamic, that had given everyone who had access to a computer a chance to try and make a dent in the universe, had been shot in the guts in 1982 and was bleeding out. That's all thanks to this guy, Robert Bork.
Robert Bork is kind of an obscure figure for most people these days, although I said that on Twitter the other day and a bunch of people in their fifties said, well, I know who Robert Bork is. But I think if you're not an American in your fifties or a certain kind of weirdo conservative activist, you've probably never heard of this guy. Robert Bork was Richard Nixon's solicitor general, and he committed crimes for Richard Nixon. And they were so egregious that when Ronald Reagan tried to appoint him to the US Supreme Court, the Senate decided not to confirm him, because he was too grimy for the US Supreme Court. And so instead, he became a kind of court sorcerer to Ronald Reagan, and he created a new theory about when monopoly laws should be enforced, a theory he called the consumer harm theory. The consumer harm theory says that we don't hate monopolies because monopolies are bad. We only hate monopolies because they sometimes raise prices. And so long as a company that has a monopoly isn't immediately raising prices after it acquires that monopoly, it's okay to let the monopoly form and to let the monopoly fester. And this idea was incredibly popular, and not just with Ronald Reagan. Every one of the neoliberal leaders of the Reagan era, from Helmut Kohl to Margaret Thatcher to Brian Mulroney to Augusto Pinochet, took up the ideas of Robert Bork and said that from now on, we are not going to get rid of our monopolies. From now on, we're going to encourage the growth of monopolies on the grounds that they are efficient, and only shut them down if we can prove that they've used their monopoly to raise prices. Now this idea is a stupid idea, but it's incredibly popular with rich people, because rich people like the idea that they could buy shares in companies that could establish monopolies. And those rich people funded Robert Bork. They created, among other things, a series of junkets called the Manne seminars, M-A-N-N-E. The Manne seminars are continuing-education seminars for US federal judges in Florida, where you fly to Florida, stay in a luxury hotel and get lectured on the brilliance of Robert Bork. 40% of US federal judges have been through the Manne seminars. Unsurprisingly, those judges are far less likely to punish monopolistic conduct. The people who like Robert Bork funded law schools and economics departments and journals. And they turned the idea of consumer harm into a kind of global doctrine that has now taken over every single regulator in the West. China has a slightly different vision of it, as does Russia, but the European Union, Canada, the US and most countries in South America have all adopted these rules. I don't know to what extent these rules have penetrated the African markets. And consumer harm, this idea that monopolies should only be shut down if you can show that they're using monopolism to raise prices, is incredibly hard to prove. In fact, you could basically call it impossible to prove. And as a result, anti-competitive conduct became so routine that we no longer think of it as unusual. Until the Bork era, here are some of the things that were considered violations of antitrust law and that would have attracted scrutiny from a regulator: merging with a major competitor, acquiring a small competitor, or creating a vertical monopoly where you own different parts of the supply chain, like Google buying an ad tech company.
Now, the story of how tech got monopolized leans hard not on Robert Bork, but on all these exotic ideas like network effects: the idea that if you have one fax machine, it's useless, and two are very useful, and three are twice as useful, and four are twice as useful again. And that once a tech company starts to become successful, the network effects snowball and you will never dethrone it, despite the fact that we no longer have Friendster or AltaVista or Amigas or any of these other potential purveyors of network effects. A close look at how tech companies grew does not show that network effects are what led to that growth. Instead, you see predatory conduct, moneyball: using access to the capital markets to raise gigantic amounts of money and buy or merge with all of your competitors as the means by which they grew. And as an example of this, I want you to think about Google for a minute. So Google is a company that has made exactly one and a half successful in-house products. They made a really good search engine and a pretty good Hotmail clone. Everything else that they've made in-house died. This is just a small sample of the Google product graveyard. And everything that they've done that's successful, Android, ad tech, YouTube and so on, all of these are companies that they acquired from someone else. So this is not a company that has a natural monopoly due to a network effect. This is a company that has an unnatural monopoly due to predatory conduct. Now network effects are indeed real. They are a thing, and you can see them exemplified pretty well with the Bell system. This was what we called AT&T in the US before it was broken up in the 1980s. But with tech, network effects are very different from other kinds of industries. Think of the railroad industry, where once you have rails that run from one place to another, it doesn't make any sense to put in a second set of rails. And so the custom accrues to that rail vendor, which can add more rails to more destinations. And before long, you have these natural monopolies emerging in rail. But that's not how it works with technology. And the reason is that technology has interoperability. So built into our general purpose computers and our general purpose network is the ability to run any program, provided you can express it in symbolic logic, and to interface any new network service with any existing network service. Now oftentimes that interoperability is deliberate and engineered. Someone will go to a standards body like the W3C and decide on what an HTTP header looks like. But just as often that interoperability is adversarial. That interoperability is a form of competitive compatibility, where a new company makes a product that plugs into an existing product or service without permission, against the wishes of the people who made the existing product or service, like Tom Jennings making his IBM PC ROMs. And what happens then is that the walled garden of the company that came before becomes a feedlot in which all of the customers have been handily penned in, so that the new market entrant can go over and choose whichever ones they want and devour them in a smorgasbord. So think about how this worked with the Bell system. The Bell system originally had not just a monopoly over the wires, but a monopoly over the things that connected to the wires. It was against the law to connect a phone or a phone-like device, or even a thing that clipped onto a phone, to a phone that came from the Bell system or to a jack that the Bell system had installed.
And they argued that because they were a monopolist, they were part of America's national security and safety apparatus, and that allowing third parties to connect things to their network would result in the network being made unreliable, and therefore America being made insecure and unsafe. But they didn't use this power just to keep the network operational. They used this power to extract monopoly rents, to make money by screwing over their customers, by preventing new market entrants. So for example, to see how bad this got, you can look at where it broke down. The first time that the system broke down was when AT&T sued a competitor called Hush-a-Phone. And Hush-a-Phone was a plastic cup that snapped over the mouthpiece of your Bell phone, so that when you were speaking, your voice would be muffled and people who were in the same room as you would find it hard to listen in on your conversation. And AT&T argued that the Hush-a-Phone, because it was mechanically coupled to the Bell system, endangered the integrity of the Bell system. And their regulator told them to go pound sand. They said, no, this doesn't endanger the system. Get used to it. People can connect things to their phones. And that's when they lost mechanical coupling prohibitions. And then they went after another company called Carterfone. And Carterfone made a walkie-talkie that plugged into a regular RJ11 jack, or that you could connect your phone to. And it was for people who worked on ranches and farms, so that they could clip a walkie-talkie onto their belt and go out and work in the barn or ride out on the range and still take their phone calls. And AT&T argued that by electrically coupling devices to the Bell system, they were violating AT&T's monopoly and endangering America. And again, their regulator told them that that was not a valid reason. And they lost the ability to block electrical coupling. And this is where we see the growth of everything from modems to answering machines and all of the other devices that eventually plugged into the Bell system. So interoperability can turn network effects on their head. And interoperability was really key to the growth of the tech monopolists of today. So think about the iWork suite and its history. Before the iWork suite came about, Apple was in really serious trouble in enterprise networks. If you ran a business, chances are most of the computers in your network were PCs, but maybe the designer or an executive who had the right to decide what kind of computer they would use would be running a Mac. And the way that Microsoft punished you for running that Macintosh in the Microsoft environment was by dragging their heels on updating the Microsoft Office suite for the Mac. And so Macs became this kind of cursed zone: if someone sent a Word file or an Excel file to a Mac, and that file was then opened and saved again, it would never be openable again on any computer anywhere in the world. It would be irretrievably corrupted. And Bill Gates did not fix this because Steve Jobs went to him and asked him pretty please to make a better Microsoft Office suite for the Mac. Instead, Steve Jobs got a bunch of engineers to reverse engineer the file formats, and then they produced iWork, whose Pages, Numbers, and Keynote can read and write Office files perfectly.
And very quickly, they were able to colonize the Microsoft Office environment, running ads like the Switch ads, where they said: well, you may have hesitated to give up your Windows PC, because all of your files are stuck in there. But what if I told you that you could read and write all the files ever created with a Windows system, and you could do it from a Mac, by running one piece of competitive compatibility software, software that was adversarially interoperable with the Microsoft ecosystem? And that is what rescued the Mac from the scrap heap of history. And it wasn't just software, it was also hardware. In the late 1990s, Lexmark was the printer division of IBM, the not very well-reformed monopolist. And Lexmark used little microchips to stop people from refilling their laser toner cartridges. And a company called Static Controls, a little Taiwanese company, reverse engineered that microchip. It only held a 12-byte program, so it wasn't hard. And they made new chips that would allow you to refill your cartridge. Lexmark lost their lawsuit against Static Controls. And so now Static Controls had this huge installed user base of people who were desperate for cheap toner cartridges. And instead of having a network advantage, Lexmark now had a network disadvantage. Today, Lexmark is a division of the company that owns Static Controls. And it's not just hardware, of course; it's also network services. When Facebook first got off the ground, Mark Zuckerberg had a really serious problem, which is that everyone who wanted to use social media was already on the dominant social media platform, a company called MySpace, that was owned by the world's most rapacious, vicious billionaire, Rupert Murdoch. And again, Zuck did not go to Rupert and say, please allow your users to talk to my users, because people want to use Facebook, but they don't want to leave their friends behind. Instead what they did was they made a bot. And you'd give that bot your login credentials, and it would go to MySpace and scrape the waiting messages that were there for you and put them in your Facebook inbox. And you could reply to them, and it would pilot them back out to MySpace. Now all of this led to a very dynamic system that completely changed the way that we interacted with technology. But all of this has gone the way of the dodo. And the reason for that is that as these companies acquired new monopolies, they diverted their monopoly rents to foreclosing on competitive compatibility. So you may remember the urgent fight over software patents, the growth of software copyrights, the ongoing problem of anti-circumvention rules that make it illegal to break DRM, most recently seen in the shutdown of youtube-dl. Enforceable terms of service: Facebook has just used its terms of service to try and shut down Ad Observatory, an academic project that tracks Facebook's compliance with its own policies on paid political disinformation. And they've said that because this service violates their terms of service, it's illegal. Never mind that Facebook had to violate MySpace's terms of service to gain its ascendancy. And then there are new rights that were purchased with very expensive lawsuits. So today we have the Google-Oracle lawsuit going through the Supreme Court in the United States, which might create a new copyright over APIs. Now all of these things, patents, copyrights, anti-circumvention, terms of service, novel copyrights, trade under the name intellectual property.
And if you're familiar with that phrase, intellectual property, you'll know that free culture activists hate this term. In fact, when you ask them what we should call these things... sorry, there's my software patent slide, I knew I had one in there somewhere. When you ask them what we should call intellectual property, they say, oh, you should call it the author's monopoly, because that's what they called it in the days of the Statute of Anne. That's what they called it at the founding of the United States: authors' monopolies. And authors get really pissy when you say that they have a monopoly. And they do for good reason, because although, formally, the fact that I wrote this speech, and therefore have the monopoly over reading it into my microphone, means that I am a monopolist, I don't have a market power monopoly. I can't use this monopoly to extract monopoly rents from the marketplace. Writers who go to the five remaining publishers, soon to be four if Bertelsmann buys out Simon & Schuster, don't get to use the fact that they have a monopoly to negotiate crazy supra-competitive prices that go beyond what would happen in a competitive market. Unlike, say, the monopolists themselves, the actual monopolists we have, who get to charge very high prices for their services. And so it's not a bad point that an author's monopoly is not a monopoly in the way that we talk about it when we talk about monopolism in the tech sector. But IP does have a very precise meaning, a meaning that has nothing to do with intellectualism or property. IP, in this sense of software patents, copyrights, anti-circumvention, terms of service, API copyrights and so on, has the precise meaning of any law or rule that lets me decide who can criticize me, who can compete with me, and how my customers must behave themselves. And when you fuse a market power monopoly with an author's monopoly, when you have a market power monopoly that has IP behind it, you get something far more durable than either a regular monopoly or an author's monopoly, a copyright monopoly. You get a monopoly that the government will defend rather than dismantle. So for example, if you have a monopoly that you can defend with a patent, like today you have HP monopolizing its ink cartridge market, and they have patents over the security chips in their ink cartridges, the government will seize compatible ink cartridges at the border on your behalf, because they violate your patents. And so instead of punishing you for creating a monopoly, the government will reward you by doing your enforcement work for you. And not only that, but once you have a monopoly that's backed by some kinds of IP, like anti-circumvention, the government will punish people who report defects in your products. So if you have a monopoly over printers, or if you have a monopoly over phones, and someone finds a defect that allows third parties to install their own ink or their own app stores, the circumvention of your DRM becomes a crime under Article 6 of the European Copyright Directive and under Section 1201 of the Digital Millennium Copyright Act and under similar laws all around the world. And the government will both fine and potentially imprison the security researchers who point out that your products have a defect in them. Now I started this talk by saying that early internet boosters were not blind to the perils of technology, but some of them were, a little.
After all, once all that money started sloshing around, if you could convince yourself that tech was an unchecked force for good, then you could also convince yourself that getting all of that money that the tech industry was generating put you on the side of good. The myth of two guys in a garage who could topple billion-dollar giants and become billionaires themselves fired a lot of techies' imaginations and sidelined a lot of their consciences. But those days are behind us, thanks to monopolies. Thanks to monopolies, founders who want to start businesses that compete with the monopolists, monopolists who have double-digit growth every year and who collectively realize tens if not hundreds of billions of dollars in profit, are told, when they go to a venture capitalist or another funder, that the funders aren't interested in funding these direct competitors. Funders call the lines of business that big tech is in the kill zone, and they understand that any attempt to fund a business that operates in the kill zone will result in your company being crushed by the monopolistic power of the entrenched company. And so instead, if you are a technologist headed to Silicon Valley, you don't dream of changing the world. You dream of, like, having a mini kitchen with free kombucha and maybe getting massages on Wednesdays on behalf of the company. And liberated from the fear of losing customers to competitors, tech has pivoted from liberating its users to manipulating, locking in and abusing them. And the code that does that manipulation, that abuse and that lock-in, it's all written by technologists. Technologists who discovered their passion for the field when they felt the thrill of self-determination through writing code and projecting it over networks. And this is a really important fracture line. I think this is a way to understand things like the Googler walkout and Tech Won't Build It, No Tech for ICE, the solidarity movements against facial recognition and other surveillance technologies: technologists are no longer able to delude themselves with the thrill of billions into thinking that it's okay to do what they've been doing. And one by one, and in increasing numbers, they're starting to wake up to the fact that it's time to do better. It's time to realize the liberatory power of technology and step back from the power of technology to control us. This will all be so great if we don't screw it up. And if we do screw it up, it's going to be really, really terrible. And there are precedents for this. And unfortunately, the precedents are pretty incomplete. So Robert Oppenheimer very famously was one of the few people in the world who was both brilliant enough at being a manager and brilliant enough at being a nuclear physicist that he could lead the creation of the first nuclear bomb in the Manhattan Project at Los Alamos in the United States. And legendarily, as that first nuclear bomb test went off, he turned away from the mushroom cloud and said, I am become Death, destroyer of worlds, and embarked on a lifelong project to demilitarize the atom, to put back in the bottle the genie that he had made. And my hope is that we can arrive at a world in which our Oppenheimers decide to put down their tools before they make their atom bombs instead of after.
We are at this crossroads now, where not only are the harms so visible that they're undeniable, but also the rewards for building the digital equivalent of these A-bombs have dwindled from a star in technology's Hall of Fame to a really well-funded pension plan. And surely that is not enough to sell out for. So many of you listening today have probably read my novel Little Brother and its sequel Homeland. I love that, because I often hear from you, especially at events like CCC and Defcon and ShmooCon and ToorCon and so on. People come up to me and they say: I read your book, and I understood both how powerful technology could be and how terrifying it would be if that power was not harnessed for the people, and was instead harnessed to oppress people. And it made me embark on my career as a technologist, a security researcher, a human rights activist, a cyber lawyer. And that's the best thing in my life, really, apart from my kid and my family. The fact that there are people out there who have devoted their lives to doing something better because of something I wrote, that's really important to me. And frankly, if my kid has got a chance of growing up in a world that doesn't make Orwell look like an optimist, it's going to be in part because of that stuff. But I've written a new Little Brother book. I wrote this book that came out this year called Attack Surface. And this is genuinely not an ad. You don't have to read it. And in fact, if you're watching this, you probably don't need to, although you might enjoy it, because this is aimed at a different kind of technologist. This is a story about the kind of technologist who spends their whole life kidding themselves that working on systems of control and oppression is not that big a deal, because if they didn't do it, someone else would do it; there's an endless supply of Oppenheimers, and if I turn my back, someone else will be there to finish my work. Who comes to a realization, maybe belated, that they've spent their whole life building a dystopia that they don't want to live in. And who redeems themselves, who comes back from the brink. And the reason I wrote this story now is because I really wanted to reach the technologists who are waking up every day and saying: I fell in love with this stuff because it liberated me, and I spend my days figuring out how to take away the future of people who might be liberated by it themselves. And that's an urgent message, because authors' monopolies, IP, are available to everyone, thanks to the Internet of Things, thanks to our embedded systems. I'm going to break here and editorialize very briefly. This is the slide I'm most proud of, not for any kind of intellectual heft, but because I'm really bad at the GIMP and I think I did a really good job. So if you're looking away from your screen, if you're, like, peeling vegetables or something, spare a glance at this slide. I'm very happy with this slide. So the IoT means that every device in our world has access to an author's monopoly, has IP in it, and that governments will enforce the strictures that the designers and manufacturers of these devices put into them by punishing people who try to use competitive compatibility to undo those strictures. And not only that, but IoT devices are not merely smart as a convenience to invoke the law.
They're also smart as a way to enforce the manufacturer's desires, as a way to control the actions of users, of competitors, and of critics. These devices have a kind of unblinking eye that watches you whenever you use them. And if it catches you trying to do something that might displease the manufacturer's shareholders, it can stop you and it can rat you out to the authorities. And so, speaking in my professional capacity as a dystopian science fiction writer, this scares the shit out of me. Now that is all a kind of grim way to end. So I'm going to finish this off with a couple of words on what gives me hope. And this comes from my colleague James Boyle at the Duke Center for the Public Domain. And Jamie, when he talks about the computer liberation movement, he compares it to the ecology movement. Before the term ecology was coined, we didn't have a movement, we just had a bunch of issues. Some people cared about whales, some people cared about owls, some people cared about the ozone layer. And maybe they thought that the people who cared about another issue were doing important work, but it wasn't their work. And they weren't really on the same side. They weren't part of the same cause. But the term ecology changed all of that. The term ecology took a thousand issues and turned them into one movement. One movement where everyone had each other's back. Even if the reason you were in the movement was owls, you were there to fight the corner of the people who cared about the ozone layer. And you began to understand that these were all facets of the same problem. Well, today, monopolies have taken over and destroyed the lives of people in a million ways. Right? Whether you're a professional wrestling fan, a beer drinker, a whiskey drinker, an eyeglass wearer, someone who flies on planes, someone who relies on energy or financial services, or someone whose money was stolen by a company whose auditors were one of the big four accounting firms. Whether you are someone who is upset because there are four movie studios left, or three record labels, or because there's only one movie theater chain of any size left in the United States. Or whether you're pissed off that you're not going to get the vaccine, because in the U.S. there's only one company of any size that makes glass bottles. All of these people don't know it, but they're on the same side. They're in the same fight. And that fight is the fight against monopolies. Now people talk about big tech as though they're super geniuses. But when we rip off the mask, we discover that these are not titans who built monopolies through their special genius. They're just three sociopaths in a trench coat. They're just the latest version of the kind of monopolist that we have been fighting since time immemorial, since the Rockefellers, since the Mellons, since every monopolistic family that tried to establish a dynasty that would allow them to rule as though they were kings was broken up and relegated to just having their names on a couple of buildings. We know how to deal with these people, and it's time that we dealt with them for what they are, which is just plain old-fashioned sociopaths, and not as super geniuses who stand astride the world like colossi. Thank you. Thank you. I think I'm back now. All right. Well, thanks for this talk.
We are basically out of time, so we're moving the Q&A to the fireside chat, which will happen in, I think, 20 minutes or so, and then all the questions that have already been asked for this talk will also be answered then. But Cory, I think we had a short stream outage around minute five. If you know what you said at minute five, you can maybe try to recapitulate it. I don't know what I said at minute five. The history of networks, apparently. Unfortunately, it was not visible for us, so I don't know myself, but I think it was the history of networks. It might have been my story about the alt hierarchy. That's probably it. And if you go to your favorite search engine, whether it's, like, AltaVista or Ask Jeeves or Yahoo, and type in alt.interoperability.adversarial, you'll find an article I wrote for the Electronic Frontier Foundation about the history of the alt hierarchy. So I think that's probably what got cut out. Okay, wonderful. Thank you. I think people know how to use Google. I think Lycos has also indexed it. Yes. They'll try to figure it out. All right. Well, thank you very much. Thank you. I'll see you guys in the fireside chat. See you guys in the fireside chat. Thank you.
They stole our future. Let's take it back. Here at the end of the world, it's time to take stock. Is technology a force for good? Can it be? Was it ever? How did we end up with a world made up of "five websites, each filled with screenshots of text from the other four" (h/t Tom Eastman)? Should we worry that machine learning will take away our free will through A/B splitting and Big Five Personality Types? Where the fuck did all these Nazis come from?
10.5446/52021 (DOI)
It is with much pleasure that I can now introduce our next speaker. It's just started raining outside, but this heavy rain is probably nothing like the extreme weather effects that we will hear about right now. The talk that we are being presented next will deal with extreme weather effects, how they are linked with climate change, and how we even know about that. Our speaker today is Fredi Otto. She's an Associate Director of the Environmental Change Institute at the University of Oxford, and she's also a lead author of the upcoming IPCC assessment report, AR6. And with no further ado, I give you the stage, Fredi, please. Okay, thank you. Yeah, hi. It's just stopped raining here in Oxford, but it's definitely flooded, so that might actually be something to come back to and talk about with respect to climate change. So whenever today an extreme weather event happens, when we hear about hurricanes, wildfires, droughts, etc., the question that is immediately asked is: what is the role of climate change? And to answer that, for quite a long time, scientists gave the answer that we cannot attribute individual weather events to climate change. Or they were saying that in a world where climate change happens, of course, every extreme weather event is somewhat affected by climate change. And the latter is trivially true, but that does not provide much information, because it doesn't say anything about whether the event was made more likely or less likely, or what the role of climate change was. And the first answer, that you can't attribute individual events, is not true any longer. Why that has changed, how that has changed, and what we can say now is what this talk will be about. So ultimately, every weather event, extreme or not, is, if you absolutely boil it down, unique, and they all have many different causes. There is always a role of just the natural, chaotic variability of the climate and weather system. There's always a causal factor in where the event happens, whether it's over land, over a desert, over a city, over a forest. But man-made climate change can also have an influence on the likelihood and intensity of extreme weather events. And so what we can say now, and what we mean when we talk about attribution of extreme weather events to climate change, is how the magnitude and likelihood of an event occurring have changed because of man-made climate change. And in order to do that, we first of all need to know what possible weather is in the world we live in today. So say we have a flooding event in Oxford, and the question is: what did climate change have to do with it? The first step is to find out what type or kind of event the heavy rainfall that led to the flooding is. Is it a one-in-10-year event? Is it a one-in-100-year event? And in order to do that, you can't just look at the observed weather records, because they will tell you what the actual weather that occurred was, but not what the possible weather under the same current climate conditions is. And so we need to find out what possible weather is. And to do that, we use different climate models.
So we simulate, under the same climate conditions that we have today, possible rainfall events in December in Oxford. And we might find out that the event that we have observed today is a one-in-10-year event. So if you do this, look at all the possible weather events, you get a distribution of possible weather under certain conditions, which is shown in the schematic on the slide here as the red curve. And then you know that when it rains above, say, 30 millimeters a day in Oxford, you have a real problem with flooding. So you define that this is your threshold, above which you speak of an extreme event. And so you have a probability of this event occurring in the world we live in today. Of course, that does not tell you the role of climate change, because in order to know that, you would also need to know what the likelihood of this event occurring would have been without man-made climate change. But because we know very well how many greenhouse gases have been introduced into the atmosphere since the beginning of the Industrial Revolution, we can actually remove these additional greenhouse gases from the climate models' atmospheres and simulate a world that would have been exactly as it is today, but without the greenhouse gases from the burning of fossil fuels. And in that world, we can then also ask the question: what are possible heavy rainfall events in December in Oxford? And we might find that the event that we are interested in is, in that world, not a one-in-10-year event, but a one-in-20-year event. And because everything else is held the same, we can then attribute the difference between these two likelihoods of occurrence of the extreme event in question to man-made climate change. And so, with this made-up example that I've just used, we would then say climate change has doubled the likelihood of the event to occur, because what was a one-in-20-year event is now a one-in-10-year event. So that is basically the whole theoretical idea behind attributing extreme events. And this method can be used in practice. For example, with our initiative called World Weather Attribution, we have looked this year at the extreme heat in Siberia at the beginning of this year that, amongst other things, led to temperatures above 38 degrees in the town of Verkhoyansk, but also led to permafrost thawing and large wildfires. And that event was made so much more likely because of climate change that it would have been almost impossible without climate change. When we did the experiments in the models, it was a one-in-80-million-year event in a world without climate change. And it's still a relatively extreme event in today's world, but it is possible. So this is the type of event where climate change really is a game changer. Another event that we have looked at is Hurricane Harvey, which hit Houston, Texas, in 2017 and caused huge amounts of damage with the rainfall amounts it brought. And several attribution studies doing exactly what I've just described found that this extreme rainfall associated with a hurricane like Harvey has been made three times more likely because of climate change. And colleagues of mine, Dave Frame and his team, have then used this study to figure out how much of the economic cost of this hurricane can be attributed to climate change, and found that of the 90 billion US dollars that were associated with the flood damage from Harvey, 67 billion can be attributed to climate change.
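[Editor's note: to make the attribution arithmetic just described concrete, here is a minimal Python sketch. Everything in it is hypothetical: the two ensembles are placeholder Gumbel random draws standing in for real climate-model output, and the 30 mm/day flood threshold is the made-up Oxford figure from her example. The only claim it illustrates is the comparison of exceedance probabilities between a factual and a counterfactual world.]

import numpy as np

def exceedance_probability(sims, threshold):
    # Fraction of simulated events at or above the threshold.
    return np.mean(np.asarray(sims) >= threshold)

# Hypothetical ensembles of simulated one-day December rainfall maxima (mm/day).
# In a real study these come from many climate-model simulations, not from a
# random-number generator; the Gumbel draws here only stand in for them.
rng = np.random.default_rng(0)
factual = rng.gumbel(loc=22.0, scale=5.0, size=10_000)         # world with climate change
counterfactual = rng.gumbel(loc=19.0, scale=5.0, size=10_000)  # world without added greenhouse gases

threshold = 30.0  # mm/day above which Oxford floods, per the example in the talk

p1 = exceedance_probability(factual, threshold)
p0 = exceedance_probability(counterfactual, threshold)

print(f"return period today:      {1 / p1:.0f} years")
print(f"return period without CC: {1 / p0:.0f} years")
print(f"probability ratio p1/p0:  {p1 / p0:.2f}")  # 2.0 would mean 'twice as likely'

A probability ratio of 2 corresponds exactly to her fake example: a one-in-20-year event becoming a one-in-10-year event.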
This is particularly interesting when you compare it to the state-of-the-art economic cost estimations of climate change in general, which had estimated only 20 billion US dollars of climate change cost for 2017 in the US. And of course, not every year has an event like Harvey, but it shows that when you look at the impacts of climate change in a more bottom-up way, looking at the extreme events through which climate change manifests and affects people, you get very different numbers than if you just look at large-scale changes in temperature and precipitation.

But of course, not every extreme event that occurs today has been made worse by climate change. An example is a drought in southeast Brazil that happened in 2014/2015, where we found that climate change did not change the likelihood of this drought to occur: it was a one-in-10-year event in 2014/2015, and it had a very similar likelihood of occurrence without climate change. However, when we asked what else had changed (why did this drought, which had occurred in a very similar way earlier, in the 2000s and also in the 1970s, have so much larger impacts this time?), we found that the population had increased a lot over the beginning of the 21st century, and in particular that the water consumption and water usage in the area had increased almost exponentially, and that explained why the impacts were so large.

So what I've just described is the very basic idea of how these studies work in theory, and some results that we find. In practice, it is usually not quite as straightforward, because while the idea is still the same, we need to use climate models, and statistical models for observational data, to simulate possible weather in the world we live in and possible weather in the world that might have been. That is straightforward in theory; in practice, it is often relatively difficult.

What you see here is how the results of these studies look when you don't use a schematic. If you're not a hydrologist, this might be a bit of an unfriendly plot, but it is basically the same as the schematic I showed at the beginning, just plotted so that you can see the tails of the distribution, where the extreme events are, particularly well. On the x-axis, we have the return time of the event in years on a logarithmic scale, and on the y-axis the magnitude of the event, which defines what our extreme event is. This is actually a real example of heavy rainfall in the south of the UK. Each of the red dots that you see on the red curve is a simulation of one possible rainfall event in the south of the UK in the year 2015, in the world we live in today with climate change. The dashed line indicates the threshold that led to flooding in that year, and going down from the dashed line to the x-axis, you can see that this is roughly a one-in-20-year event in the world we live in today. All the blue dots on the blue curve are simulations of possible heavy rainfall in the south of the UK in 2015 in a world without man-made climate change. You can see that these two curves are different, significantly different, but still relatively close together. The event in the world without climate change would have been a bit less likely: we have roughly a 40% increase in the likelihood.
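Each dot on such a return-time plot is one simulated year, and the return period of a threshold can be read off empirically from how often the simulations exceed it. A hedged sketch of that estimation, with synthetic random numbers standing in for real model output (all values here are invented for illustration):

```python
import random

random.seed(0)
# Stand-ins for simulated annual maximum rainfall (mm/day) in the two worlds.
factual = [random.gauss(22, 6) for _ in range(1000)]         # with climate change
counterfactual = [random.gauss(20, 6) for _ in range(1000)]  # without

def exceedance_probability(samples, threshold):
    """Fraction of simulated years exceeding the flooding threshold."""
    return sum(s > threshold for s in samples) / len(samples)

threshold = 30.0  # the rainfall amount that caused flooding
p1 = exceedance_probability(factual, threshold)
p0 = exceedance_probability(counterfactual, threshold)

print(f"return period today: {1/p1:.0f} years, without climate change: {1/p0:.0f}")
print(f"probability ratio: {p1/p0:.2f}")
```

Real studies fit extreme-value distributions to such samples rather than counting exceedances directly, but the counting version shows where the numbers on the plot come from.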
But still, other factors play an important role: the chaotic variability of the weather, and of course factors on the ground, like houses built in floodplains and so on. So this is the actual attribution step, where we find out what the role of climate change is. But of course, in order to do that, there are a few steps before it that are crucially important and absolutely determine the outcome.

The first thing to find out is what has actually happened. Usually when we read or hear about extreme weather events, we see pictures in newspapers of flooded parts of the world; you don't usually have observed weather recordings reported in the media. And the same is actually true for us: we work a lot with the Red Cross, and they ask us, okay, we have this large flooding event, can you do an attribution study, can you tell us what the role of climate change is? At that point we also just know that there is flooding. So the first step is to find out what the weather event was that actually caused that flooding, and that is not always straightforward.

What you see here on this slide is a relatively stark example, but not an untypical one. It is an extreme rainfall event on the 10th of November 2018 in Kenya. On the left-hand side is one available product of observational rainfall data, and on the right-hand side is another product showing the same event. The scale, which I failed to put on the slide, is in millimeters per day. On the left-hand side, you have extreme rainfall of above 50 millimeters per day. Considering that, for example, my home town of Kiel gets about 700 millimeters of rainfall per year, you can see that 50 millimeters in a single day is very heavy rainfall. In the other data product, you still see heavy rain, but not of the same magnitude, and it is not in exactly the same place either. And given that most countries in the world do not have an open data policy, so you can't actually get access to the observed station data but have to use publicly available products like the two shown here, you have to work with experts in the region, who hopefully have access to the data, to actually find out what has happened in the first place. There is not always a perfect answer to that question, but if you don't know what your event actually is, it is very difficult to do an attribution study.

Assuming you have found a data product that you trust, the next question is: what is the right definition of the event? If your flooding was pretty obviously caused by a one-day extreme rainfall event, then that would be your definition of the event. But it could also be that the flooding was caused by a very soggy rainy season, so the real event you would want to look at is on a much longer time scale. Or if the flooding occurred mainly because of water management in the rivers, and actually originated further upstream, your spatial definition of the event would be very different. What you see here on this plot is an example from a heat wave in Europe in 2019, and what usually makes the headlines is the maximum daily temperature.
So if records are broken, you could use that as the definition of the event you are interested in. But of course, what really causes the losses and damages from extreme events is not necessarily the one-day maximum temperature; it is when heat waves last longer, and especially when the night temperatures are also high, not just the daytime temperatures. So you might want to look at an event over a five-day period instead of just the maximum daily temperatures. Or, and this is why I've shown the pressure plot on the right-hand side, which is really just an illustration (it's not terribly important what's on there), there are of course different weather systems that can cause heat waves, especially in the area here in the south of France: it could be a relatively short-lived high-pressure system bringing hot air from the Mediterranean, or something caused by a long-lasting stationary high-pressure system over the whole of Europe. If you want to take that into account, your event is obviously different again. And there is no right or wrong way to define the event, because there are legitimate interests in the maximum one-day temperatures, legitimate interests in a specific type of pressure system, and an interest in what actually causes excess mortality, which would be the three-day or longer heat waves. But whichever definition you choose, it will determine the outcome of the study.

And here are some typical results of attribution studies, looked at in a slightly more scientific way and slightly less in the headline way of the ones I've shown earlier. Because what is also important is not only how you define the event, depending on the impact, on what you're interested in, and on what observational data you have available, but also what climate models you have available. There is always some trade-off between what exactly caused the event and what we can meaningfully simulate in a climate model, and all climate models are good for some things and bad for others. So there always needs to be a model evaluation stage, where you test whether the models you have available are actually able to simulate the event you are interested in in a reliable way.

But even if you have done all this, it can sometimes happen that the models and the observations show very different things. The heat wave in Germany in 2019, which was also on the slide before, is an example of that. When we look at the long-term observations of high temperatures and see how they have changed over time, we find that, with the change in climate we have observed, the likelihood of this type of heat wave has increased by about a factor of 300. You see this in the black bar in the middle of the blue bar on the left-hand side, at the very top, where it says DWD, that's the Deutscher Wetterdienst observations; this black bar sits, again on a logarithmic scale, at about 300 times more likely. But because we only have about 100 years' worth of observations, and summer temperatures are extremely variable, there is a large uncertainty around this change. From the observations alone, we cannot exclude a 100,000-fold change in the likelihood of this heat wave, but similarly also not a 20-fold change.
But the main point is that all the red bars you see there are the same results, but for climate models, where we have compared today's likelihood of the event to occur with the likelihood in the world without climate change, and you see that the change is much lower. And of course, climate change is not the only thing that has changed and affected observed temperatures, but other factors, like land use change, are much smaller in size than the climate signal, so they cannot explain this discrepancy. This means that the climate models we have available for this type of study obviously have a problem with extreme temperatures on a small scale, and there are effects that we don't yet understand. So we can't say the heat wave was made 10 times more likely; we can only say that, with our current knowledge and understanding, climate change made this type of heat wave clearly more likely, but we can't really quantify it.

On the right-hand side, the top one is a much nicer result, for extreme rainfall in Texas in 2019. And by nicer I mean nicer for scientists, in a scientific way. We have, in blue, two different types of observations of the heavy rainfall event, and they both show pretty much exactly the same result. And the two climate models that we had available, and that passed the model evaluation tests, show an increase in the likelihood of this event to occur that is very similar to that in the observations in terms of order of magnitude. So in that case, we can just synthesize the results and give an overarching answer, which is that the likelihood of this event to occur has about doubled because of man-made climate change.

The last example here is a drought in Somalia in 2010, where not only are the observations extremely uncertain, so that from the observations alone you could have both an increase or a decrease in likelihood by a factor of 10, but the climate models also show a very mixed picture, where you can't even determine the sign of the change conclusively. In that case, you can say that we can exclude that climate change changed the likelihood of this event by more than a factor of 10 in either direction, but we can't say anything more. So we can exclude that it is a complete game changer, as we have for heat waves, for example, but that is about the only thing you can say for a result like this.

So this was the most detailed scientific part that I wanted to show, because I think it is important to have some background behind headline results that would otherwise just read "climate change doubled the likelihood of this event". A priori, there are always four possible outcomes of an attribution study. And that is because climate change affects extreme weather in basically two ways. One is what we would call the thermodynamic way: because we have more greenhouse gases in the atmosphere, the atmosphere overall gets warmer, so on average you have an increase in the likelihood of heat waves and a decrease in the likelihood of cold waves. A warmer atmosphere can also hold more water vapor, which has to come out of the atmosphere as rainfall, so on average, from the warming alone, we would also have more extreme rainfall. But then there is a second effect, which I call the dynamic effect.
And that is because changing the composition of the atmosphere affects the atmospheric circulation: where weather systems develop, how they develop, and how they move. This effect can either work in the same direction as the warming effect, so it can be that we expect more extreme rainfall and also get more rain-bringing low-pressure systems, and thus even more extreme rainfall. But these two effects can also counteract each other: you can expect more rainfall on average, but if you don't get the weather systems that bring rain, you either have no change in likelihood and intensity, or, if the dynamics win, you actually have a decrease in the likelihood of extreme rainfall in a particular season or region. And this is why, a priori, there can always be four outcomes: the event was made more likely, it was made less likely, there is no change, or, with our current understanding and tools, we can't actually answer the question.

It has been possible to do this for about a decade now, but only in the last five years have many scientists really started to do these studies, so there are now quite a lot of attribution studies on different kinds of extreme events. What you can see on this map is how the climate and energy news outlet Carbon Brief has put all these studies together: in red where climate change played an important role, in blue where climate change did not play a role, and in gray where the result was inconclusive. It is very important, though, to note that this is not representative of the extreme events that have happened; it just represents the studies that have been done by scientists. And those are, of course, biased towards where scientists live, and towards extreme events that are relatively easy to simulate with climate models. So there are lots of heat waves in Europe, Australia and North America, because that is where the scientists are.

On this next map, I have tried to show the discrepancy between the extreme events that have happened and those for which we actually know the role of climate change. Here in red are deaths associated with extreme events since 2003, the year of the first event attribution study: deaths from heat waves, storms, heavy rainfall events and droughts, primarily, in different parts of the world. The bubble is always placed on the capital of the country, and the larger the bubble, the more deaths due to extreme events in those years. Overlaid in black are those deaths for which we know the role of climate change. That doesn't mean the deaths are attributed to climate change; it means we do know whether, and to what extent, climate change played a role. You can see that for most of the European countries the black circle is almost as large as the red one, so for most of the deaths associated with extreme events we do know the role of climate change. But for many other parts of the world there are only very small black circles, so for most of the events and the deaths associated with them, we don't know what the role of climate change is. And I've used deaths here not because I'm particularly morbid, but because it's an indicator of the impacts of extreme weather that is relatively well comparable between countries.
So this means that with the event attribution methods we have developed over the last decade, we have the tools available to provide an inventory of the impacts of climate change on our livelihoods. But we are very far from having such an inventory at the moment, because for most of the events that have happened, we actually don't know what the role of climate change is. So we don't know, in detail, on the country scale and on the scale where people live and make decisions, what the role of climate change is today.

There is another, slightly related issue. The extreme events that I used to create the map I showed before, with the deaths from extreme weather events, are from a database called EM-DAT, a publicly available database where losses and damages associated with disasters, technological disasters but also weather-related disasters, are recorded. But of course, they can only record losses and damages if those are reported in the first place. What you see on this map, in gray and overlaid with different circles, are heat waves that occurred between 1986 and 2015, but you could draw the same map from 1900 to today and it would look very similar. It shows lots and lots of heat waves reported in Europe, in the US, in India, but there are no heat waves reported in most of sub-Saharan Africa. However, when we look at observations, we see that extreme heat has increased quite dramatically in most parts of the world, and a particular hotspot is sub-Saharan Africa. So we know from looking at the weather that heat waves are happening, but they are not registered and not recorded, so we have no idea how many people are actually affected by these heat waves. And then, of course, we don't do attribution studies and don't find out what the role of climate change in these heat waves is. So in order to really understand the whole picture, we would also need to start recording these types of events in other parts of the world.

And my very last point, before I hope that you have questions for me: everything I've said so far was about the hazard, the weather event itself, and how climate change affects the hazard. But that does not translate immediately into losses and damages, because whether a weather event actually has any impacts at all is completely driven by exposure and vulnerability: who and what is in harm's way. I've already mentioned the example of the drought in Brazil, where the huge losses and damages were to a large degree attributable to the increase in water consumption. And therefore, in order to really find out how climate change is affecting us today, we not only need to define the extreme events so that they connect to the impacts, but also look into vulnerability and exposure: what is changing there, and what the important factors are. But we can do that. And so we have really made a lot of progress in understanding how climate change not only affects global mean temperature, which we have known about for a very long time, and large-scale changes in temperature and precipitation, which we have also known about for a very long time.
But we now actually have all the puzzle pieces together to really understand what climate change means on the scale where people live and where decisions are made; we just need to put them together. And one lens through which they are currently put together is, for example, the courts, because it is obviously people who experience losses and damages from climate change. One way to address that is going through national or local governments, hoping for adaptation measures to be put in place, but if that is not forthcoming quickly enough, there is the option to sue. This is one example, currently happening in Germany, where a Peruvian farmer is suing RWE to pay their share of adaptation measures, because of a greatly increased flood risk from glacier melt in the area. They want RWE to pay according to its contribution to climate change through its emissions, and thereby fund some of the adaptation measures. That is one example of where these kinds of attribution studies can be used in a very direct way to hopefully change something in the real world. And with this, I would like to end, leave you with some references, and hope you have some questions for me.

[In German:] So, many thanks for the talk. Before we come to the Q&A, I have to apologize to the viewers on behalf of the production. I don't have any questions from the chat so far, but maybe one question from me. The last example was a case of a lawsuit across national borders, so to speak. Is that an approach we will see more often in the future, that is, that across national borders people or organizations try, via litigation, to bring each other onto the right path? Well, it is actually an exception that this works in the case of RWE and Lliuya, because German law provides that companies based in Germany are also responsible for damages that do not occur in Germany. Sorry to interrupt, I just realized that we are still in the English talk. Sorry for that. No worries.

So your question was whether we are going to see more international court cases, across countries, across nation states, where we have climate litigation. This type of litigation, of which the case I've just shown is an example, is an exception insofar as under German law a company is also responsible for the damages it causes outside of Germany, which is not the case, for example, for companies in the US. That is why Lliuya sued RWE and not, for example, ExxonMobil. But of these types of cases, of which the Lliuya case is an example, we see an increasing number each year. They are difficult to do across nations, because German law is an exception in that regard. But there are other ways, for example via human rights courts, which can work across nation states, and that is also happening. At the moment, it is still legally not super straightforward to actually win these cases, but increasingly a lot of lawyers are working on that, so we will see a lot of change in the coming years.

Okay, thank you. In the meantime, some questions have appeared from the chat and from the internet; I will go through them. The first question is: are the results of the individual attribution studies published as open data in a machine-readable format?
So, for all the studies that I've done with my team at World Weather Attribution, all the data is available, on a platform called Climate Explorer, so it should be machine-readable. And this is deliberate, because we want to make it as transparent as possible, so everyone can go away, use our data, redo our studies and find out if we've made any mistakes. But this is not the case for all the studies that exist, because many of them are published in peer-reviewed journals, and not all peer-reviewed journals have open data and open access policies. But increasingly, journals do. So if you, for example, go to the Carbon Brief website and look at the map of studies, you have links to all the studies, and a lot of them have the data available.

Okay, maybe a follow-up to this one. The next question is: are the models somehow available or usable for a wider interested public, or is APC required? I'm not quite sure what APC means. The model data is publicly available. This is one reason why we have been able to do these studies: until relatively recently, model data was not publicly available, and only scientists working in a specific country could use the model developed in that country, but now all the model data is shared publicly and people can use it. So it is definitely there and usable; it just requires some expertise to make sense of it.

Okay. The next question is: with what certainty can you set up the counterfactual models, which are an important reference for your percentage values, and what data are these models based on? So, the climate models we use for the counterfactual simulations are basically the same models that are also used for the weather forecast; we just run them at lower resolution, which I guess most of this audience knows what that means: the data points are further apart, so that it is not so computing-intensive. These models are tested against observed data, and that is how we do the model evaluation for the simulations of the present day. And for the counterfactual: we know extremely well how many greenhouse gases have been emitted into the atmosphere since the beginning of the Industrial Revolution, so there is a very high certainty in that number, and we remove that from the models' atmospheres. The models have exactly the same setup, but a lower amount of greenhouse gases in their atmosphere, and are then spun up and run in the same way. Of course, we can't test the counterfactual, so we assume that the same physics still holds in the counterfactual, and that the models that were developed using the present day also represent the counterfactual. That is an assumption, but not a completely unreasonable one, because we now have decades of model development and have seen that climate model projections made 30 years ago have actually been realized, on a large scale, in pretty much the same way as they had been predicted. So that is not a big assumption. The counterfactual itself is not the problem; but of course, the present-day model simulations are also very far from perfect, and there are some types of events that state-of-the-art climate models just can't simulate, and there we can say very little.
So while, for example, for hurricanes we can say things with high certainty about the rainfall associated with them, the hurricane strength itself and the frequency of hurricanes are very difficult to simulate with state-of-the-art models, so our uncertainty there is much higher.

Okay. And then, well, one question that emerges from all this: if we know this much, and way more than in the past, how are politicians still ignoring that information, and how can we convey it into their minds? Well, if I knew the answer to that, I would probably not be standing here, but actually doing politics. I think it takes a frustratingly long time for things to change, and things should change much faster, but these last two years have shown huge progress in putting climate change on the agenda of every politician. That is largely due to the Fridays for Future movement, but also, to a degree, due to the fact that we now actually know that the weather people experience in their backyard, pretty much independent of where that backyard is, is not the same as it used to be. So people do experience climate change today, and I think that does help to bring a bit more urgency. And, well, I've said everyone has climate change on their agenda, which was very different even two years ago, when there were lots of people who would never talk about climate change and on whose political agendas it played no role. That doesn't mean it has the right priority on the agenda, but it is still a huge step forward. So I think we do know some things that work, and we just have to keep doing them. Yeah, I don't think I can say more; I don't have a magic wand to change it otherwise.

Maybe some other point of impact. One of the questions is: is it possible to turn the results of attribution studies into recommendations for farmers and people who are affected financially by extreme weather, and into ways to change agriculture to reduce losses from extreme weather effects? Yes, absolutely. That is one of the most useful things about these studies: on the one hand to raise awareness, but on the other hand, if you know that an event you have experienced, and that has led to losses, is a harbinger of what is to come, that is incredibly helpful for knowing how agricultural practices might need to change, or how insurance for agricultural losses might need to change. And this is exactly why we do these attribution studies: because not every extreme event shows the fingerprints of climate change, and if you know which events are the ones where climate change is a real game changer, you also know where to put your efforts and resources to be more resilient in the future. And for financial losses: on the one hand, you can use these studies to find out what the physical risks for your assets are. And everything that I have said about comparing the counterfactual with the present, we can and do also do with the future, so you can also see how the likelihood and intensity of events change in, say, a two-degree world. And you can then also, in a less direct way, use this kind of information to assess other risks: where there might be stranded assets, what the risks for the financial sector and for financial planning are, and where liability risks could lie and what they could look like.
So, because extreme weather events and the changes in their intensity and magnitude are how climate change is manifesting, this really connects all the aspects of where the impacts of climate change are.

Okay, the last question for today; I hope I get it right. I think the question is whether there are studies on how the way we cultivate fields in agriculture impacts the overall climate in an area. The example given: was it only the increase in water consumption that affected São Paulo, or might there also be a warming effect created by monoculture in central Brazil? So, I don't know the details, but land use change and land use do play a role. On the one hand, they affect the climate: if you have a rainforest, you have a very different climate in that location than if there is savanna or a plantation. And of course, if you have monocultures, your losses are usually larger than if you have different types of agriculture, because in a monoculture everything is vulnerable in exactly the same way. So land use change plays a hugely important role with respect to the impacts of extreme weather, and that is one thing to look at when I was talking about looking at vulnerability and exposure. And of course, changes in the hazard are not just because of climate change, but also because of land use change, and you can use exactly the same methods: instead of changing the CO2 or the greenhouse gases in the atmosphere of your model, you change the land use, and then disentangle these different drivers of the hazard.

Okay, Fredi Otto, thank you very much for your presentation and for the Q&A; it was a pleasure to have you with us. If you have any more questions, I guess there are ways to contact her; I think her email address and contact details are in the Fahrplan, for all the viewers who have more questions. And, I don't know, do you have access to the 2D world, and do you explore that? Given that I don't know what you mean, probably not. Okay. But it would be... That can also be changed. Yeah, it's the replacement for the Congress venue itself. But anyway, if you viewers out there have any more questions, contact Fredi Otto. And thank you again very much for your talk, and have a nice Congress. Thank you.
"Listen to the science" is relatively easy when it comes to mitigating climate change, we need to stop burning fossil fuels. However, climate change is already here, in this talk I'll focus on what the science has to say on extreme weather and losses and damages. For a long time it has not been possible to make the - arguably for the day to day life of most people crucial link – from anthropogenic climate change and global warming to individual weather and climate-related events with confidence but this has changed in recent years. Quantifying and establishing the link between individual weather events, that often lead to large damages, has been the focus of the emerging science of extreme event attribution. Even if a comprehensive inventory of the impacts of climate change today is impossible, event attribution allows us to understand better what climate change means. Arguably even more importantly, disentangling predictable drivers of an extreme event like anthropogenic climate change, from natural variability and changes in vulnerability and exposure will allow a better understanding of where risks are coming from and in turn how they can be addressed. Extreme events open a window to address the problem of exposure and vulnerability. Scientific evidence of the importance of different drivers is essential to avoid playing blame games and allows instead for a well-informed debate about addressing risk.
10.5446/52022 (DOI)
Welcome to the rC3! Hello and welcome at the Franconian stage. Sorry for the delay, we had some technical problems. I want to present you an interesting talk from Miro, about names. If you think about names, a name can have a first name, a last name, a middle name, a prefix, a postfix, but also think about encoding. Miro will tell us about the difficulties he has.

My name is Miroslav Šedivý, and I'm going to speak about names in IT and programming. Python is my example programming language; if you're using a different one, most of this probably applies to the same extent. I'm going to speak about strings and bytes, about encoding, about normalizing, case folding, sorting, regular expressions, and about names on the web: how they are formed, how they consist of different parts (prefix, first, middle, last names), and about the allowed characters.

In Python 3, strings and bytes are two different types. Unicode has more than one million possible code points, while a byte, with its eight bits, has only 256 different possibilities. Strings exist only in memory, and one character can consist of several bytes, because there are many more than 256 possible characters. In memory, you work with strings, and at the end you convert them into bytes when you want to save them to a file on disk or send them over the network.

If your name is Chuck Norris, you don't need any special encodings. The string "Chuck Norris" will be encoded into bytes using the .encode method, and the other way around you can decode the bytes back to "Chuck Norris"; both look the same. In Python 3, the default encoding is UTF-8, and "Chuck Norris" looks the same in UTF-8 as in plain ASCII: it is 12 characters and 12 bytes when encoded as UTF-8.

If you are from Germany and your last name is Müller, it will be a little bit different, because in UTF-8 the ü, the u with diaeresis or umlaut, doesn't fit into one single byte; you need two bytes. So there are two bytes that encode your ü. And if your last name is Chinese (of course, 你好 is not a last name, it means "hello"), the two Chinese characters are encoded as six bytes in UTF-8. It works the other way around too: if you know that your bytes are UTF-8, you can decode them and get your original Chinese characters back.

But there are also other encodings apart from UTF-8. UTF-8 is great because it works for all of Unicode and for most characters that we need. But much earlier, there was ASCII, which has only seven bits and only 128 possible values, so there is a limited number of characters that can be encoded directly. In the case of Chuck Norris, it works. But if your name is Müller, you will need something like Latin-1. Latin-1, or ISO 8859-1, is an encoding that works very well for some Western European languages like German, French, Spanish, Italian and others, and it defines which characters map to which bytes. The ü with umlaut has a place in the Latin-1 encoding table, but many other characters don't, because the number of possible characters is quite limited. My last name, Šedivý, which means grey-haired in Czech and Slovak (it comes from Czechoslovakia), cannot be encoded in Latin-1.
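A minimal sketch of the encode/decode round trip just described (Python 3, where UTF-8 is the default):

```python
# Pure ASCII: one byte per character.
name = "Chuck Norris"
data = name.encode("utf-8")          # str -> bytes
assert data.decode("utf-8") == name  # bytes -> str, round trip
print(len(name), len(data))          # 12 12

mueller = "Müller"
print(len(mueller))                  # 6 characters
print(len(mueller.encode("utf-8")))  # 7 bytes -- 'ü' needs two bytes in UTF-8

nihao = "你好"
print(len(nihao.encode("utf-8")))    # 6 bytes -- each character needs three bytes
```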
In languages like Czech, Slovak, Polish, Hungarian and other Central European languages, Latin-2 is used, which has some other characters in the places where Latin-1 has its own. So the š, the s with caron, has a place there, but it is not encoded in Latin-1; you need to encode it in Latin-2.

Now, living in Germany, I sometimes get packages by post from German companies, and I can see that they have some problem with my last name, because on the web form my last name is written correctly, Šedivý, but on the sticker on the package the Š is replaced by a question mark. And I just wanted to know: why is it like that? Because Š and a question mark look quite similar, right? How is it possible that they encoded my Š as a question mark?

In Python, if I encode my last name Šedivý into Latin-2, it works. But if I encode it as Latin-1, it will not find the š character and it will raise an exception: UnicodeEncodeError, I don't know the š, because it is not contained in Latin-1, as I said. But I received a package, not an exception, so there must be some way they fix it so that it works and I get my package, although with a wrongly printed name. And yes, there is a parameter in Python with which you can encode to Latin-1 and, if there is an error, replace the character. The default for errors is to raise an exception, as in the first case, but if I tell it to replace on errors, the character will be replaced by a question mark. Why they chose a question mark, no idea; it is not configurable, but there is a small hack that gives you the possibility to replace your missing character with something else. You can write a short one-liner in Python, for example, where I say: if there is an error, use my replace-randomly function. In that case, it will just put a random digit there, or you can write something else, some funny character that will be printed instead of the missing character. In my case, I got a 5 instead of the Š. Well, a 5 looks more like an Š than a question mark does. That's fine. This is probably how that big company encoded my last name: they converted it to Latin-1, and that's where the question mark comes from.

There are some other big companies in Germany. One, I will not name it, but they have beautiful big trains all around the country: on the mail I get, on the customer card, on the online tickets, they manage to always use a different encoding and always write my last name differently. There is another big company that has big airplanes; when I wanted to buy a ticket and write my last name, they told me: "You can only enter letters in the adult's last name field." My last name consists of letters. So what is a letter?

In Python 3, you can name a variable using any character that is a letter. So, for example, I could do something like that. But I cannot name a variable, for example, with a question mark or a smiley. How does Python know that š is a letter and a question mark or a smiley or some arrow is not? If you import the standard unicodedata library, there are some functions that give you the possibility to investigate and inspect what the characters are.
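The "small hack" mentioned here is Python's codecs error-handler registry. A hedged sketch of how both the question mark and the random digit can come about; the handler name replace_randomly is made up for illustration:

```python
import codecs
import random

name = "Šedivý"

print(name.encode("latin-2"))                    # works: b'\xa9ediv\xfd'
# name.encode("latin-1") would raise UnicodeEncodeError for the 'Š'.
print(name.encode("latin-1", errors="replace"))  # b'?ediv\xfd' -- the '?' on the package

def replace_randomly(error):
    """Custom error handler: substitute a random digit for the bad character."""
    # Must return the replacement and the position at which to resume encoding.
    return (random.choice("0123456789"), error.end)

codecs.register_error("replace_randomly", replace_randomly)
print(name.encode("latin-1", errors="replace_randomly"))  # e.g. b'5ediv\xfd'
```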
Now I have a few characters: an a with umlaut, the sharp s, lowercase and uppercase (yes, there is an uppercase sharp s), a dot, a question mark and a smiley. Then I ask for the category and the name of each. What you see in the first column is the character; then you see a two-letter code like Ll, Lu, Zs, Po, So. This is the category of the character. If it starts with an L, it means it is a letter, and the following l or u tells you whether it is a lowercase or uppercase letter. Zs, Po, So and the rest are different categories for separators, punctuation, symbols, numbers, digits and so on. And then you see the name, for example LATIN SMALL LETTER A. This is the list of all categories, and every character in the Unicode table belongs to some category. And then there is a possibility to access the information behind that: if it is a letter, I can use it as a letter; if it is a number, I can probably get the decimal value of this character, even if the character doesn't look like one of our digits, for example for alphabetical numbers in other scripts. There is also the character map app, which will give you all the information about a character: the category, the name, and all its characteristics.

Case folding. Case folding is the possibility to switch between lowercase and uppercase letters, which works in the Latin alphabet, in the Greek alphabet, in the Cyrillic alphabet. It doesn't work in Chinese; it works in some alphabets, and in ours it does. So if I have some characters in lowercase, I get the uppercase version, and vice versa. There are some exceptions: for the sharp s, there is an uppercase sharp s, but Python converts the uppercase version of ß to "SS", which is arguably wrong. The other way around, if I have the uppercase sharp s and convert it to lowercase, it works. So this is not a symmetrical operation. This works for all characters that are lowercaseable or uppercaseable, but it doesn't always work correctly. We have seen the case of the sharp s, and there is one other case that is contained even within ASCII, within the basic 26 letters of the Latin alphabet: the case of the letter i. You see the difference between lowercase i and uppercase I: one tiny dot. There is a lowercase i with a dot and an uppercase I without a dot. And there is at least one language, or a family of languages, that distinguishes between a dotted and a dotless i: for example, Turkish. In Turkish, the i with dot and the dotless ı are two different letters with two different sounds. Now imagine you have some Turkish text, not you, but your Turkish colleagues, and you want to convert between uppercase and lowercase: it can go wrong, and sometimes so wrong that a word can mean something different with the wrong i. So our Turkish colleagues actually have to use a workaround: they import ICU, the International Components for Unicode library, take their locale, and convert case this way. That's a little bit more complicated.

Normalizing is something that you probably don't usually see: it is the decomposition of characters into their parts. For example, take the German word süß, which means sweet.
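A hedged sketch of the kind of inspection loop and case-folding quirks just described, using only the standard library:

```python
import unicodedata

for ch in "äß.?":
    print(ch, unicodedata.category(ch), unicodedata.name(ch))
# ä Ll LATIN SMALL LETTER A WITH DIAERESIS
# ß Ll LATIN SMALL LETTER SHARP S
# . Po FULL STOP
# ? Po QUESTION MARK

print("ß".upper())  # 'SS' -- not the uppercase sharp s 'ẞ'
print("ẞ".lower())  # 'ß'  -- so upper/lower is not a symmetrical operation
print("I".lower())  # 'i'  -- wrong for Turkish, where I should lowercase to 'ı'
```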
And I have two words that look the same. The first one has three characters; the second one is the normalized NFD form. What does that mean? I take again my tiny script that shows all the characters within the string, and you see that the first word contains three characters and the second one four. The difference is the ü: in the first case, it is one character, LATIN SMALL LETTER U WITH DIAERESIS; in the second case, there are two characters, LATIN SMALL LETTER U followed by COMBINING DIAERESIS. And one thing you see is that the line with the combining diaeresis is shifted by one character: the combining diaeresis has zero width, it is just glued to the character before it, and together they look like a ü. There are plenty of combining characters, so you can actually put a caron on a sharp s, or combine almost any character with other combining characters. Here is an example: the famous Stack Overflow answer to the question of how to parse HTML with regular expressions. Of course you can't. What you see at the end are just characters with plenty of random combining characters after them. And it looks cool.

Alphabetic sorting. There is the built-in function sorted in Python that takes a list, or a string, which is effectively a list of characters, and sorts it according to some rules. If you have numbers, great, it's easy. But if you have characters, look at the example: at the beginning I have uppercase A, O, U, then lowercase a, o, u, then the uppercase umlauts, then the sharp s, then the lowercase umlauts, then some Central European characters like the Czech and Slovak ones, and the uppercase sharp s comes at the very end. So the order doesn't look very natural. This is because all these characters are converted to their Unicode code points, their positions in the Unicode table, and sorted according to those; the uppercase sharp s came late into Unicode, so it ends up at the end. This is not what you would like to see as an alphabetical list of names or a phone list.

So let's sort according to the German language, because sorting may be a little bit different for every language. We import locale, set our locale to German, and sort these characters using locale.strxfrm as the key. And it looks better: first I have both a's, then the ä, and then b. This is how it should look in a German dictionary, phone book or list of names. But for a Swedish user, seeing the ä between a and b is not natural: Swedish expects the umlaut characters at the end of the alphabet, after z. In Hungarian, there is the "ch" sound, written as CS, and CS doesn't sort between CR and CT; it is a separate letter between C and D, so a word like csípős, which means hot like a chili pepper, comes after all the other C words. In Czech and Slovak, we also have this sound, but we write it with č, the c with caron; you have seen the caron already in my last name, which has an š with caron.
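A hedged sketch of both steps just described, assuming the de_DE.UTF-8 locale is installed on the system:

```python
import locale
import unicodedata

word = "süß"
decomposed = unicodedata.normalize("NFD", word)
print(len(word), len(decomposed))  # 3 4
for ch in decomposed:
    print(unicodedata.name(ch))
# LATIN SMALL LETTER S
# LATIN SMALL LETTER U
# COMBINING DIAERESIS
# LATIN SMALL LETTER SHARP S

# Locale-aware sorting -- note that setlocale changes process-wide state.
locale.setlocale(locale.LC_COLLATE, "de_DE.UTF-8")
print(sorted(["Bär", "Aas", "Ä", "ab"], key=locale.strxfrm))
# ä now sorts together with a, instead of after all ASCII letters
```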
The Slovak alphabet has 43 letters, with all the possible diacritics. Another special thing is, for example, the CH, which sorts alphabetically as its own letter between H and I. But there are exceptions: if two words are glued together and the first one ends with a C and the second starts with an H, it is not a CH, and it is sorted differently. In French, it is even more interesting: they sort some things from the beginning of the word and some things from the end. Usually, when they sort words, they first sort everything according to the base ASCII form, and then, as with these four words that have the same base form, they look at the last syllable: the first two words have no accent on the final e, the other two have an accent aigu. First come the two words whose last syllable is without accent, then the words with the accent, and within those they sort according to the penultimate syllable, and so on. That's French. If you have seen their keyboard layout, you understand why they do it like that.

The problem is that the locale is connected to the process. That means if you call setlocale in your code, and your code is a library, or a website with plenty of users and plenty of threads running, this changes the locale of the whole process. And that is not what you want, because if one user wants everything sorted according to German rules and another according to Swedish rules, they will just break it for each other. But we have already seen the ICU library, which lets you use locales as objects, object-oriented, so you can do everything within your own method without changing anything for the whole process. Another possibility, which is much more lightweight, is pyuca. It sorts nicely, but according to the default Unicode collation, without knowing any specific locale, so it gives better sorting than the default code-point order, but it is not optimal for every language. If you need one general list, though, you can go with pyuca.

Now, regular expressions. If you have a problem and use regular expressions, now you have two problems. Anyway, let's say we want to extract the name from the string "München13"; München is the German name of Munich. If I import re and ask for the characters [a-zA-Z], it finds the M and then "nchen", but it doesn't find the ü, because ü does not belong to the range a to z. There is \w, which does find the ü, but it also matches all digits, so I get "München13", and I'm not interested in the digits; I just want München. So how can I extract München with a regular expression? There is the third-party library regex, which works like the standard re module but has some more functionality. In this case, that's the possibility to ask for \p{...}, a Unicode property: L is the category for any letter, Lu would be uppercase letters, Ll lowercase letters.
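Two short sketches may help here. First, the object-based collation just described, which keeps the process locale untouched; this assumes the third-party PyICU and pyuca packages are installed:

```python
import icu                      # PyICU
from pyuca import Collator

words = ["Bär", "Aas", "ab", "Öl"]

# Per-locale collator objects: no process-global setlocale needed.
de = icu.Collator.createInstance(icu.Locale("de_DE"))
sv = icu.Collator.createInstance(icu.Locale("sv_SE"))
print(sorted(words, key=de.getSortKey))  # German rules: umlauts near their base letters
print(sorted(words, key=sv.getSortKey))  # Swedish rules: ä, ö sort after z

# Lightweight, locale-independent alternative.
print(sorted(words, key=Collator().sort_key))
```

And second, the Unicode-category matching with the third-party regex module:

```python
import re
import regex

word = "München13"
print(re.findall(r"[a-zA-Z]+", word))  # ['M', 'nchen'] -- the ü is skipped
print(re.findall(r"\w+", word))        # ['München13']  -- digits included
print(regex.findall(r"\p{L}+", word))  # ['München']    -- letters only
```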
And this is how you can actually use regular expressions to find words that contain characters beyond ASCII.

So, I came here for Python, but the talk is about names, and that was the programming part. Now let's have a look at the names themselves. I cannot see you, but you can raise your hand if your name fits into the first-name/last-name scheme; mine fits. Maybe there are some people here who have a middle name, or who have patronymic and matronymic surnames, like in Spanish. Maybe there is someone from Hungary or from East Asia who has the family name first and the given name last. Are there any popes or queens or kings here, somebody who has only a single name? Or, for example, from the Nordic countries: if someone in Iceland is called Sigur and their father is Johan, this person is called Sigur Johansson. But Johansson is not a last name; it is a patronymic. So you can call them Sigur, you can call them Sigur Johansson, but you don't call them Mr. Johansson. And in an alphabetical list, they are not under J like Johansson; they are under S: Sigur Johansson.

There are also different grammatical forms of names. For example, in Czech and Slovak, the masculine and feminine forms of names are different. My name is Šedivý, and all the women in my family are called Šedivá, grey-haired: that is the feminine grammatical form of this adjective. For all noun surnames: if someone is called something like Müller, the women are called Müllerová, with an -ová at the end. And this is not only for Czech and Slovak names: if you read a Czech or Slovak newspaper, you will see Angela Merkelová. Merkelová is still okay, because Merkel sounds like a last name that would also be acceptable in Czech and Slovak, but there are also names from Africa and Asia that are grammatically not compatible with our languages and that still always get the -ová at the end.

And of course, if you have a title, a von or zu, or some academic title which is a part of your name, it is even more complicated to decide how to write it in a form. If a form asks for a first name and a last name, where do you write your title, or your second name, or your patronymic, or the other parts of your name? So what I actually suggest is to have one field for the full name, where you write the name as it appears officially on your passport, and then one field for how we should call you. In my case: full name Miroslav Šedivý, and you can call me Miro. And there are some other people who make it really clear how you should call them and how you should write their names.

This is what I sometimes see when I write my name into forms here in Germany: "Please enter characters from the European character set only." What is the European character set? Šedivý is a Central European, Czech and Slovak name, and it contains only characters from the European character set. "Bitte geben Sie einen gültigen und vollständigen Namen ein", please enter a full, valid name, in German. My name is valid. A name of a person cannot be invalid. So if you program something that has to do with names, and I'm not speaking about the GDPR, I'm speaking about common sense: don't assume anything.
Don't put a random limit on the length of a name. There are short names, there are long names, there are very long names. There are names so long that even if there is a typo in them on Wikipedia, the person won't notice it. Don't use stop words: if something is a stop word in your language, it is probably a perfectly valid name in another language. As I said, family members don't necessarily have the same family name; see the -ová names in Czech, Slovak, Polish and other languages. There are different transcriptions from non-Latin alphabets: of course, every European language writes the same Russian or Chinese name differently. And it works the other way around too: I went to Russia twice with the same passport and got two visas, and on each visa my whole name was printed in Latin and in the Cyrillic alphabet, and the Cyrillic transcription of my last name was different on those two pages. The Russian officials see my last name and each transcribe it into the Cyrillic alphabet in their own way. Men change their family names too, so if your form asks for the maiden name, that's probably not what you want, because there are plenty of men who change their family name after they get married. A one-letter name is probably not an initial: Benoît B. Mandelbrot, the French guy who did quite a lot of beautiful stuff with fractals; the B is not an initial, it's just a B. So everything that is printable is probably fine; you have to expect anything in a name.

Maybe you have heard about this guy, Christopher Null: "Hello, I'm Mr. Null. My name makes me invisible to computers." If your program has problems with that, I'm sorry. Someone else ordered a customized license plate for their car that read NULL. The guy thought: hey, this is great, if I get a speeding ticket, they are not going to be able to attribute the ticket to my license plate, because it just says NULL. In the end, he received all the speeding tickets in the county that could not be attributed to any license plate, because they were all mapped to NULL. So he received way too many. And if your database has a problem with a guy named Robert'); DROP TABLE Students;-- then okay, see you in the Q&A.

Street names are interesting too, just like the names of people (the SQL sketch below shows the usual fix for the Robert problem). If you are in Germany, do you know the most common street name in Germany? Yes, it's Einbahnstraße. I'm just joking: Einbahnstraße means one-way street, but many foreigners think it is the name of a street, and when they park a car they just write down "I am in the Einbahnstraße", and then they need quite a long time to find their car again. What you can see in many US directories of companies from Germany is "HauptstraBe", probably because OCR or some other programs are not able to recognize the sharp s (ß) and write it as an uppercase B. The names of places can also be very short, like Å somewhere in Scandinavia or Y somewhere in France; the inhabitants call themselves Ypsiloniens. So if you live in a place like this, and then you get a security question like "What is your mother's maiden name?" or "What is your place of birth?", and it says "You have to enter at least six characters": no, don't do that.
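A brief aside on the Robert'); DROP TABLE Students;-- joke: the usual defence is to never splice a name into a query as a string, but to use placeholders. A hedged sketch with the standard sqlite3 module; table and column names are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

dangerous = "Robert'); DROP TABLE Students;--"

# Never build the statement with string formatting, e.g.
#   f"INSERT INTO students VALUES ('{dangerous}')"
# Instead, let the driver handle quoting via a placeholder:
conn.execute("INSERT INTO students (name) VALUES (?)", (dangerous,))

print(conn.execute("SELECT name FROM students").fetchall())
# [("Robert'); DROP TABLE Students;--",)] -- stored verbatim, nothing dropped
```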
And there are some places, like Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch, that are a little bit longer, or Chrząszczyrzewoszyce, powiat Łękołody. So you really need much more space, and you shouldn't stop after 10, 15, 20 or 30 characters, because places can have really long names. And sometimes there are places that don't even need names, because somewhere in Iceland you can just draw a map on the envelope and it will arrive. There are plenty of things that you have to think about when you are programming something with names, and they can surprise you. There are pages like "Falsehoods programmers believe about names"; I invite you to just read them, and you will see quite a lot of interesting stuff that you have not thought about earlier. Your name is invalid? No, your name is not invalid. Please, as a developer, respect the names of your users, because their names are valid. Don't fight the locale; import ICU if you are in Python. Convert from bytes to string as soon as possible and from strings to bytes as late as possible, so you can work with strings the whole time (a small sketch of this follows after this section). UTF-8 is cool, Python 3 is cool: bury Python 2 and use Python 3 and UTF-8. And if you tell the user "your name is invalid", you will land on the Twitter account "your name is valid". Actually, this also shows a limit of Twitter, because an account name can have a maximum of 15 characters, so "your name is invalid" wouldn't fit there. So there is a Twitter account: your name is valid. It would be nice. Thank you very much. Miro, thanks for your interesting talk. Thank you. It was a pleasure to hear what kinds of problems you can have with names and encodings, and how you, as a programmer, can work around them, and you really should. I have some questions from the audience. They are specifically about non-letter characters, characters like the apostrophe and the hyphen. What do you think about them? They should be allowed in names. At least in passports, it looks like they are allowed. If they are part of the name of a person, then they are valid. Yes, apostrophes, maybe even numbers, all of that should be allowed. Yes, as I thought: you have to accept almost anything and deal with it. Okay. Really, thank you for your talk, interesting for us all. Sorry for the problems with the stream we had; if you missed something from the stream, just go to the recording afterwards. There will be a full recording of this session, including questions and answers. Thanks again, Miro. Thank you very much. Bye.
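The "decode early, encode late" advice from the talk, as a minimal sketch (written in Rust here rather than the talk's Python; the principle is the same):

```rust
use std::io::{self, Read, Write};

fn main() -> io::Result<()> {
    // Decode at the boundary: bytes -> String as early as possible.
    let mut raw = Vec::new();
    io::stdin().read_to_end(&mut raw)?;
    let name = String::from_utf8(raw).expect("input was not valid UTF-8");

    // In between, work only on strings and characters, never raw bytes.
    let greeting = format!("Hello, {}!", name.trim());

    // Encode at the boundary: String -> bytes as late as possible.
    io::stdout().write_all(greeting.as_bytes())
}
```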
Names of people cannot be invalid. People have names. Most people do. People have first names and last names. Many people do. People have all sorts of names that often don't fit the fixed fields in forms. These names may contain letters, accented letters, and other characters that may cause problems for your code, depending on the encoding you use. They may look different in uppercase and lowercase, or may not be case foldable at all. Searching and sorting these names may be tricky too. And if you design an application, web form, and/or database dealing with personal names, you'll have to take that into account.
10.5446/52024 (DOI)
Welcome to rC3! Hello and welcome on the Franconian.net stage. We now have the pleasure to present the next talk, called Fuzzers Like LEGO, held by Andrea and Dominik. They both are developers of AFL++, and they are creating a new fuzzing library. If you are new to it, fuzzing means to randomly input data into a program until it starts to crash. From there on, the input data will be changed to trigger more crashes or specific behavior. Have fun! Welcome to our talk, Fuzzers Like LEGO. We, that's Andrea Fioraldi and me, Dominik Maier, are going to talk a bit about building blocks today. Not like these, although we all know that Congress loves LEGO, but we're going to talk about how software, and specifically fuzzers, automated tools to test software for security vulnerabilities, can be built and put together. So, first, let me introduce ourselves. We're both academics doing our PhDs at universities around Europe, we both play CTF, and we're both part of the AFL++ team. For whoever is not really familiar with fuzzing: AFL is one of the most well-known fuzzers around, and AFL++ is a fork that is maintained by a group of four people, us two, Marc and Heiko. We managed to increase the overall fuzzing performance, the execution speed, and also just path finding and bug finding pretty dramatically over the course of the last one and a half years. Here you can see the Fuzzbench experiment summary. Fuzzbench is a pretty good offering by Google, giving fuzzer authors the possibility to test their fuzzers against real targets. Here you can see AFL++ 2.60c in comparison to AFL++ 3.0, which is the latest version and which is pretty advanced in comparison to the old-school AFL we forked at the time. So, yeah, would we want to say that AFL++ is the best fuzzer around? Well, we get many Fuzzbench points, so yeah, we have a great success here, but the truth about fuzzing is that AFL++ isn't the best fuzzer. Actually, the best fuzzer is honggfuzz. No, the best fuzzer is of course the Frida API fuzzer, which uses Frida to fuzz. Oh yeah, the best fuzzer is libFuzzer, which is included in the LLVM project. The best is Unicorefuzz, which can fuzz pretty much anything that you have with a CPU. The best fuzzer is Fuzzilli, which fuzzes JavaScript engines... No, the best fuzzer is Domato, which is specifically for browsers. No? No? Okay. Well, then, I know: usually the best fuzzer is your custom fuzzer, which is tuned for that specific use case and adapted to your specific needs. So, it may come with custom mutations that are only useful for the weird XML dialect of your target, or there's no off-the-shelf fuzzer for this specific weird architecture thing that you try to fuzz. Well, there is a bit of a problem here, because if I can't use an off-the-shelf fuzzer, how would I create a fuzzer? Well, the usual way is to just fork an existing fuzzer. That's why we have a lot of AFL forks out there. Yes, I know it's funny coming from a guy who just told you all about his own AFL fork, but it's true. Our AFL++ fork actually tries to incorporate many of these, but of course, it's impossible to incorporate all of them. Well, the other way is that you can create a whole fuzzer from scratch, which of course works, but that way you don't reuse any existing code. You will have to spend a lot of time just doing basic engineering things. You adopt different techniques from different fuzzers and have to stitch them together. You reinvent the wheel, and in the end, you will end up with a more naive design.
So getting to the point of specialization and to the performance of something like AFL, AFL++, or honggfuzz takes a lot of engineering effort. For a weird target, you are not going to put that much effort into it. And then, lastly, you will not be able to just take your one-core fuzzer and scale it to many cores or even many machines with ease, not without even more additional engineering. So, I mean, it's in the title of the talk already, but how would you go about building something that's reusable? Well, our solution is a fuzzing library. Our goal is to build a library that can be used to develop custom fuzzers quickly and easily. The library offers you basic blocks that can be put together into a properly working fuzzer, and each of these blocks can be exchanged and amended. The community can add their own blocks, and you can then put together the perfect fuzzer for your target with not too many lines of code. Now we're going to go through each of these components in finer detail. In the following part of this talk, we present the concepts that we defined to abstract the properties of fuzzers. We will give some examples, very simple examples, related to fuzzers that you should already know, like AFL and so on, so we relate this abstraction to some possible implementations. We will show how some bits of these entities translate to code, which is Rust in our case, and we hope that the community will profit and learn about this new vision of fuzzing as building blocks. All these parts should be swappable and can be revisited without any kind of problem for the other parts. So, for instance, you can define a new type of mutator, swap it in for the existing mutator, and all the other stuff works without problems with your new mutator. The first component that we will discuss is the observation channel, or observer for short. That is the entity that provides some information about a specific run of the target. This information lives inside the target and is usually read-only: the fuzzer doesn't use observation channels to instruct the target, the fuzzer just observes. So it is a passive channel, and it is usually deterministic, but not in all cases. A very, very straightforward example that you should already know is the AFL observation channel, a map in shared memory that logs, in each bucket representing an edge in the control flow graph of the program, the number of executions of that edge in the current run of the program. Another very simple observation channel, used by off-the-shelf fuzzers, is the execution time: when a test case is fed to the target and the target is run, fuzzers measure the time, typically in milliseconds, needed to execute each test case. But apart from these very common types of observations that fuzzers make about targets, as the spirit of the library is to be fully abstract, you can define a new type of observation, like, in this case, reachability of a program point: you define a target program point in the code, and the observation data of the channel is just a Boolean that says yes or no, whether the current run reached this target program point.
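A rough sketch of how such observation channels could look as Rust code. These are hypothetical, heavily simplified types for illustration, not the actual API of the library:

```rust
/// An observation channel: read-only data about one run of the target.
pub trait Observer {
    /// Reset the channel before the target runs.
    fn pre_exec(&mut self);
    /// Collect or finalize data after the run.
    fn post_exec(&mut self);
}

/// AFL-style edge-coverage map living in shared memory:
/// one bucket per control-flow edge, incremented by the target.
pub struct MapObserver<'a> {
    pub map: &'a mut [u8],
}

impl Observer for MapObserver<'_> {
    fn pre_exec(&mut self) {
        self.map.iter_mut().for_each(|b| *b = 0);
    }
    fn post_exec(&mut self) {}
}

/// Reachability channel: did this run hit one chosen program point?
pub struct ReachabilityObserver {
    pub reached: bool,
}

impl Observer for ReachabilityObserver {
    fn pre_exec(&mut self) {
        self.reached = false;
    }
    fn post_exec(&mut self) {}
}
```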
We discussed the observation channels, which are deeply connected with the target, and whose data lives inside the target, we said. But how can we instruct the target to do something? The component for this purpose is the executor, which basically instructs the target about the current input and runs the fuzz case, runs the target with the given input. For instance, if your target runs inside an emulator, the executor will place the input in a specific area of the emulator's memory and start the execution of the emulator to run the target with the given input. This is just an example. The executor is deeply associated with the observation channels; in our design, they are even contained inside the executor. The most simple possible example of an executor comes from libFuzzer: it's the in-memory executor, in which the input is just passed as an argument to a harness function, and the execution of the target program is actually the execution of this function. It is the most simple executor that you can imagine. Another, more complex example that you know is the forkserver of AFL, which is a more complex mechanism to control the target using inter-process communication, pipes between two processes. One is the fuzzer; there is an intermediate process that is a copy of the target, and each time the fuzzer requests a new execution using the pipe, it forks itself into the target child, which is the actual target process that is fuzzed. And when the target child exits, this is communicated to the intermediate forkserver, which communicates the outcome to the fuzzer using another pipe. So there is a double indirection in AFL's executor. Now we discuss the feedback entity, which is the entity that manages the data inside the observation channels. The main purpose of feedbacks is to produce a fitness value that says whether the state of the observation channels is interesting, which means that the test case related to the last execution of the executor is interesting, which means that it can be added to the corpus of the fuzzer. This corpus is evolved during the fuzzing algorithm and is used, for instance, for mutations and so on. So a feedback is a score, a function with a state that assigns a score to executions, and if the score is okay, the test case is added to the corpus. A very straightforward example of a feedback is the one used by almost all off-the-shelf fuzzers, the maximization map. There is a map like the one in the map observer, but this time it lives inside the fuzzer. It is a history map that keeps track of the maximum value seen so far across the cumulative observations made over all executions. In this case, a test case is interesting when the state of the map observation channel has an entry with a value greater than the corresponding entry in the history map of the feedback. When this happens, the entry in the feedback is also updated, so the state of the feedback always evolves. A very common usage of the maximization map is to maximize coverage, the number of executions of the edges, as in AFL, for instance. The code is very similar to this one: if there is an entry in the observation channel that is greater than the one in the history map, the history map is updated and the fitness is increased.
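The slide itself is not in the transcript, but the history-map update described here boils down to a loop like the following (a simplified sketch, not the real library code):

```rust
/// Feedback keeping, per map entry, the maximum value seen so far.
pub struct MaxMapFeedback {
    history: Vec<u8>,
}

impl MaxMapFeedback {
    pub fn new(map_size: usize) -> Self {
        Self { history: vec![0; map_size] }
    }

    /// A run is interesting if any observed entry exceeds the maximum
    /// recorded so far; the history map is updated along the way.
    pub fn is_interesting(&mut self, observed: &[u8]) -> bool {
        let mut interesting = false;
        for (hist, &cur) in self.history.iter_mut().zip(observed) {
            if cur > *hist {
                *hist = cur;
                interesting = true;
            }
        }
        interesting
    }
}
```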
The very same feedback, so the very same code, the very same implementation, can be used to do a very different job, resulting in a very different outcome of the fuzzer, by changing just a few lines of code in the target: to report, instead of the number of executions of the edges, the size of allocations, for instance. That is one possible usage of a maximization map if we want to maximize the size of mallocs, to spot out-of-memory bugs, for instance. Now, fuzzing has an objective, which in most cases is to find a violation of some requirement, like crashes, timeouts, the violation of some invariants of the program, and so on. But as we like abstraction, we define the objective of the fuzzer as a set of objective feedbacks. These are feedbacks just like the normal feedbacks we discussed, but when they report an interesting result, the test case is added not to the corpus that the fuzzer uses for the evolution of its state, but to the objective corpus. That is the corpus that is not used anymore by the fuzzer; it's just for the user. In this set of test cases are the cases that comply with your definition of the objective of the fuzzer. For instance, in the normal case, the objective corpus contains the test cases that crash the application. Or, in a stranger case, reusing again the example of reachability, we can define the reaching of a specific program point as an objective of the fuzzer. So, for instance, we can start fuzzing with invalid inputs, put a reachability condition on a portion of the code that we know is reached only with a valid input, and in the end the objective corpus contains only valid test cases for the input format. A very important entity in fuzzing is, of course, the input, which we define not as the input that is expected by the target, but as the input that is used inside the fuzzer. What this means is that the input is the structure that the fuzzer uses to easily manipulate the input, and it then communicates the input to the target possibly in another format. For instance, if we define an input as a structure with fields and maps and so on, maybe the target expects a byte array: the fuzzer can easily manipulate this structure inside the code, but then, when the executor instructs the target about the input, we serialize this input structure. A complex example of an input can be the abstract syntax tree. Imagine a grammar-based fuzzer: we store the test cases as trees, not as byte arrays. Maybe the target expects a byte array, but inside the fuzzer we manipulate the tree with tree operations, like splicing and appending and so on. For instance, in a mutator we can swap nodes and so on, and when the input must be put inside the target, it is serialized to the format that the target expects.
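A sketch of this input idea, with a toy syntax-tree input that is only unparsed to bytes at the boundary. These are hypothetical types; the serde crate with its derive feature is assumed for the serialization part:

```rust
use serde::{Deserialize, Serialize};

/// The fuzzer-internal representation of an input: structured and easy
/// to manipulate, serialized only when the target wants raw bytes.
pub trait Input: Serialize + for<'de> Deserialize<'de> {
    fn to_target_bytes(&self) -> Vec<u8>;
}

/// A grammar-based fuzzer could store test cases as syntax trees.
#[derive(Serialize, Deserialize)]
pub enum Node {
    Literal(String),
    Rule(Vec<Node>),
}

#[derive(Serialize, Deserialize)]
pub struct TreeInput {
    pub root: Node,
}

impl Input for TreeInput {
    /// Unparse the tree into the byte array the target expects.
    fn to_target_bytes(&self) -> Vec<u8> {
        fn unparse(node: &Node, out: &mut Vec<u8>) {
            match node {
                Node::Literal(s) => out.extend_from_slice(s.as_bytes()),
                Node::Rule(children) => {
                    for c in children {
                        unparse(c, out);
                    }
                }
            }
        }
        let mut bytes = Vec::new();
        unparse(&self.root, &mut bytes);
        bytes
    }
}
```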
Another component is the corpus. We discussed it already, we talked about it as a set of test cases, but it is not just a set of test cases. In our model, the corpus is the place in which the interesting inputs are stored, together with their metadata: all the information that is not internal to the input, like the execution time, the new entries of the coverage map that were discovered, the time when the test case was first added to the corpus, and so on. These properties are related to the test case that is in the corpus, and just to the test case in this corpus; they are external, so they cannot be stored in the input structure. But the definition of the corpus doesn't end here, because it also defines a policy about how the fuzzer should request test cases from the corpus. The fuzzer, each iteration, requests a test case from the corpus, and the most naive implementation of this policy is a random policy: when the fuzzer requests a test case, the corpus gives a random test case. But you can also follow the AFL approach, and each time the fuzzer requests a test case, serve one with a first-in-first-out approach using a queue, and so on. The mutator component, of course, is very important, because it is the most used way to generate inputs in feedback-driven fuzzing, and the definition is very simple: one or more inputs are taken, and a new derived input is generated. It can be very simple, just one mutation, a bit flip, or complex, with a scheduled mutator that applies multiple mutations; some scheduling policies can also be defined to apply these mutations. The two policies can be how many mutations the mutator should apply each run, and which mutations it should apply. In a basic fuzzer, like AFL, these policies are random, but in more advanced solutions, scheduling algorithms can be applied, like MOpt, in which the mutation is selected using a history of the effectiveness of each mutation, so mutations that lead to interesting inputs get more priority in the selection of mutations. Deeply connected to the mutator, there is the generator, which generates inputs, this time from scratch, and takes some parameters as input, for instance the probability of expanding some rule in a grammar when generating a test case from scratch using that grammar. Generators of inputs are used to generate the initial corpus, in cases in which the user doesn't provide the fuzzer with an initial corpus, or as part of a mutator that has a mutation that generates a part of the input from scratch, not just the entire input, and so on. In some strange cases, they can also be used as a post-processing step in fuzzing: you evolve not a real input representation, but a set of probabilities, and before sending the input to the target, there is a post-processing stage that generates the actual input, taking this set of probabilities as a parameter. The most simple policy to generate an input can be a random byte array, maybe with some additional requirements, like just printable bytes and so on, or even complex generators, like the grammar-based generators that are used, for instance, in Nautilus, also to mutate inputs: one of the possible mutations in Nautilus is to take a node in the input and replace it with a newly generated subtree, so the mutator itself uses a generator as a mutation. A very abstract component is the stage, which is defined as the entity that performs an action on a single test case. This can mean, of course, anything; the definition alone doesn't explain much.
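Before the talk moves on to stages, here is how the mutator entity just described could be sketched, with both scheduling policies random, as in plain AFL. Toy types, dependency-free on purpose, not the real API:

```rust
/// Tiny xorshift RNG (seed must be non-zero) to keep the sketch
/// self-contained; the talk notes the RNG choice matters for speed.
pub struct Rand(pub u64);

impl Rand {
    /// Random number in 0..n (n must be > 0).
    pub fn below(&mut self, n: usize) -> usize {
        self.0 ^= self.0 << 13;
        self.0 ^= self.0 >> 7;
        self.0 ^= self.0 << 17;
        (self.0 as usize) % n
    }
}

/// One (or more) inputs in, a derived input out.
pub trait Mutator {
    fn mutate(&mut self, rng: &mut Rand, input: &mut Vec<u8>);
}

/// A single primitive mutation: flip one bit somewhere in the input.
pub struct BitFlip;

impl Mutator for BitFlip {
    fn mutate(&mut self, rng: &mut Rand, input: &mut Vec<u8>) {
        if !input.is_empty() {
            let byte = rng.below(input.len());
            input[byte] ^= 1u8 << rng.below(8);
        }
    }
}

/// A scheduled mutator picks how many and which mutations to apply;
/// both policies are random here, while MOpt would bias the choice by
/// each mutation's past effectiveness.
pub struct ScheduledMutator {
    pub mutations: Vec<Box<dyn Mutator>>,
}

impl Mutator for ScheduledMutator {
    fn mutate(&mut self, rng: &mut Rand, input: &mut Vec<u8>) {
        for _ in 0..(1 + rng.below(16)) {
            let idx = rng.below(self.mutations.len());
            self.mutations[idx].mutate(rng, input);
        }
    }
}
```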
The idea is that the fuzzer requests a test case from the corpus each iteration, and the stages are all the actions that are applied to just this single test case taken from the corpus. The most simple stage that you can think about in fuzzing is the mutational stage: there is a loop, and in this loop the test case is mutated using a mutator, it is executed, it is evaluated using the feedbacks, and, if interesting, added to the corpus. But it can also be more complex: you can use a scheduling algorithm to intelligently choose how many iterations this loop must do. This is a well-explored topic in the literature. For instance, there are a lot of works that try to maximize the effectiveness of fuzzing by selecting this number of iterations, which in AFL depends on the performance score, and there are works like AFLFast that define several scheduling policies, depending, for instance, on rarely covered portions of the code and so on, to give more iterations to interesting inputs and fewer iterations to shallow inputs. Another example of a stage is the analysis stage, a stage that can, for instance, run a different executor that performs debugging and collects information with the observers, storing this information as metadata. And after that, you can, for instance, run another stage that makes use of this metadata, which can be, for instance, comparison values extracted from the target to guide mutations; that is a common analysis stage used in fuzzers. If you know AFL, another example is the trim stage, which tries to minimize the size of test cases while maintaining the same coverage. And so on: you can also define other types of stages yourself, like calibration and so on. Now, I have presented in theory what for us are the core concepts, the core entities, behind feedback-driven fuzzing, in an abstract way, so that they can almost directly be translated to code. That is easier said than done, but they can. But of course, this is just theory, and to provide a real implementation we need additional components related to the implementation, which Dominik will discuss in the next slides. Thank you, Andrea. So, these additional components are not really theoretical background of fuzzing, but they are actually needed for this library to work and for fuzzers to work. Let's take a little look at them. So, in our LEGO house, there are these parts: on the left, you see the display, so we need some form of output for the user. At the very top you see the blue stuff, which communicates with the outside world. And then there are also the internals of this green thingy, for example the random number generator. We actually benchmarked a lot of random number generators; it makes a huge difference for execution speed which RNG you use. Then we have, of course, the state. Pretty easy to explain, right? It's the state. It contains the corpus and the feedbacks and other entities. We had to split it up a bit, so the corpus is not a completely separate thing, so that it works well together with Rust. But as a core concept, you have the state, and each time you run an input, it may evolve the state and parts of the fuzzer. And then, last but not least, and pretty importantly, we have an event manager. Each new event that occurs during fuzzing will be sent out using an event manager. We have different implementations of the event manager, one of which just displays the changes to the user, which is enough for single-core fuzzing.
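A sketch of the event-manager idea, with made-up names: the logging variant below is the single-core case, while the LLMP variant discussed next would serialize each event into a shared map instead of printing it.

```rust
use serde::{Deserialize, Serialize};

/// Events a fuzzing node can emit while running.
#[derive(Serialize, Deserialize, Debug)]
pub enum Event {
    NewTestcase { input: Vec<u8> },
    UpdateStats { executions: u64, corpus_size: usize },
}

/// Anything that ships events out of the fuzzing loop.
pub trait EventManager {
    fn fire(&mut self, event: Event);
}

/// The simplest implementation: just show events to the user,
/// which is all a single-core fuzzer needs.
pub struct LogEventManager;

impl EventManager for LogEventManager {
    fn fire(&mut self, event: Event) {
        println!("{:?}", event);
    }
}
```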
And another one, which is called the low-level message passing (LLMP) event manager, can pump out each event. So each time a new test case is found or metadata is added to the corpus, we can pump out this event to all other nodes that are fuzzing as well. This makes it easy to scale to all cores, and in the future it may even allow us to scale across different machines. The nice thing about LLMP is that it's implemented using shared maps, so very rarely does any fuzzer instance need to go and ask the kernel for something. Usually everything is inside shared maps, and all the nodes that are fuzzing just listen for changes on the shared maps. In terms of code: well, the initial implementation of many of these concepts, we evolved them a bit since, was done last summer by Rishi during the AFL++ Google Summer of Code, and that allowed us to test out all of these ideas. However, C is just showing its age, let's say, right now. It doesn't have generics, which are pretty useful for this kind of use case, and it doesn't really allow an easy way to do object-oriented programming or any abstractions. So what we did was write the thing in C++, which is the logical next step. That led us to a lot of virtual functions, which are slower, and then to a lot of different templating craziness to get away from the virtual functions. So here we are today, talking about a Rust implementation. To conclude, after a few weeks of in-depth Rusting, we can say that some language features are still missing, but overall it's more legible and still performant in comparison to C++. We looked around the Rust community and we found a pretty good keynote from RustConf 2018 about game development, and the concepts we found in this keynote we then translated into Rust patterns for our fuzzing library. We ended up with a game-state kind of thing, which is called the fuzzer state, which I already talked about earlier. The fuzzer state contains feedbacks, executors, the corpus and stages, all the things you heard about before. And then, one very special, un-Rusty Rust part is an AnyMap. The reason is that there are no real downcasts in Rust, so what we have is a hash map that we can put anything into, hence the name AnyMap, and later we get out the type that we put in. That way, each part of the fuzzing pipeline can store and retrieve data at any point in time. We then evolved this concept further into Haskell-like tuples, with the nice benefit that accesses are already checked at compile time. And of course, what we really came to love is Serde, the serialization and deserialization framework in Rust. That takes a lot of the effort of writing serialization code away from us. So let's go back to scaling. We saw that just spawning a single thread on glibc will enable futexes on writes, which makes fuzzing potentially slower if the target prints. So we just said: you know what, we don't have any threads. We have a single process in our example implementation, and whenever an interesting test case is discovered, it's synced lock-free over these shared map channels. We have a marginal serialization overhead, but afterwards no more syncing is involved. This is pretty cool. And we even added an option: if all observers are the same for each client, we can reuse the results and we don't have to rerun the input on a different fuzzer. And you can still have clients that are not all the same shape.
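How the Serde part could look, as a hedged sketch: a toy state that derives serialization, so shipping it (or a single new test case) to another node is one call. serde_json stands in here for whatever compact format the real library actually uses:

```rust
use serde::{Deserialize, Serialize};

/// A toy fuzzer state; deriving Serialize/Deserialize means a snapshot
/// (or a single new test case) is one serialization call away from
/// being published on the shared map for other clients.
#[derive(Serialize, Deserialize)]
pub struct FuzzerState {
    pub corpus: Vec<Vec<u8>>,
    pub coverage_history: Vec<u8>,
    pub executions: u64,
}

fn main() {
    let state = FuzzerState {
        corpus: vec![b"FUZZ".to_vec()],
        coverage_history: vec![0; 65536],
        executions: 0,
    };
    // Serialize once; receivers deserialize lock-free from the map.
    let bytes = serde_json::to_vec(&state).unwrap();
    let _restored: FuzzerState = serde_json::from_slice(&bytes).unwrap();
}
```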
For example, you could have one client that has additional instrumentation; in that case, you would have to rerun. But if two clients are exactly the same fuzzer, the receiver will just take over the test case from the other client and go on fuzzing with the new test case. You can see here that everything on these 80 cores is green. Green means there is not much kernel involvement, which is what we strive for in fuzzing, because in AFL land it's known that the kernel will usually slow down your fuzzing if you use it a lot. And you can also see in this chart in the middle that we actually scale pretty well: it's almost linear for libpng. We get over 10 million executions per second on this machine. You are probably wondering: well, you chose Rust, fine, but I don't like Rust, can I even use this library if I'm not a Rust person? Yes. The cool thing is, this is only the core. Our libpng test harness already uses C++. We already have a version using QEMU mode, which also works; it integrates flawlessly according to our tests. Plus, the library is completely no_std plus alloc, which may tell Rust people something. It basically means you can potentially include this in a kernel later, and target almost anything that a static clang binary can be built for. The bad part: it's not done yet. We tried to finish it for the talk, but there are still some little engineering details here and there that we want to finish before we release it to the public. So you can already look at the old C library, which already has the scaling aspects in it, and you can expect the Rust rewrite to come really, really soon. If you're super interested, shoot us a message; we will let you know as soon as possible, and we can maybe even give you early access. And AFL++ will stay around for the normal use cases, so if you just want to fuzz, you can still do it with AFL++. Thank you all for listening. We showed you the main fuzzing building blocks, and we showed you how we translated them into our library, which will be out very soon. We think that we chose good defaults, and we will provide examples, like a libFuzzer-like example, that you can just use and adapt to your own harnesses. So follow our GitHub project and us to hear about updates, and enjoy the rest of rC3. Thank you so much. Okay. Thank you very much for your talk. And now let's have a look at the questions with Andrea and Dominik. Right now, as I see, there are no questions asked in the chat, so I have two short questions that I would ask, and if you still have questions, we have a little bit of time, so please ask them right now, and you will still get through with them. So, the first question, you already somewhat answered it: it seems to be possible to fuzz applications that aren't written in Rust, right? Yes, totally. So the core, the library, is written in Rust, but you can fuzz anything that you can link Rust against, which is almost anything, in the end. Okay. And the library is no_std, so you don't even need to have an operating system around it, in the end. Oh, okay. Cool. So you could basically also fuzz embedded projects. Yeah, for sure. That's part of the plan. That sounds very good. And one more question from me: right now you said you didn't release the source code yet; is there a place where you can follow the status? Yes, the status: as soon as it's ready, it pops up on our GitHub. Okay. Maybe also the Discord channel; there is the fuzzing Discord, where we usually post information and updates about the project.
Okay. So there's a special Discord. I'm not sure if we can somehow add it here; maybe we could put it below later. We can; in the slides that we will release, we can put a link. Yeah. Okay. I see there are some questions coming in. I'll look at the pad to see if they're already sorted. No, I'll look directly in the IRC. So: will the new thing be a tool, or is it just a library? Andrea? It can be both. It is a library, and alongside the library we will provide some default configurations, default tools, that can be built with the library and that you can use as a tool, or as a skeleton for your own tool based on the library. Okay, great. For example, right now we already have a libFuzzer clone running, so anything that exposes the standard LLVM libFuzzer input function, which fuzzing people will know, you can already fuzz using, well, the library, in the end. We also have a QEMU executor that does snapshot fuzzing, which is very fast and can be used to fuzz binary-only targets on Linux. But the idea is that, in the end, you write your own few lines of code to wrap it up, and you don't have all the command line flags that you now have to specify for the existing fuzzing tools. Okay, cool. Another question from the chat is: did you try it yet on bare-metal systems? Not that far yet. We know that it builds, but there are some things still left. For example, to use this LLMP message passing, you need some place that gives you a new shared map, and, you know, all this harnessing has to be done, and we've not tried it yet. Okay. So, one more question: I'm curious how pluggable the mutators will be, if I want to, for instance, tie in some protocol grammar. Yes, you can. If you want to do structure-aware fuzzing, the two components that you want to override are the input, because you need some representation of your structure, and the mutator. Just think about objects in any object-oriented programming language: you define a new class and instantiate a new object of this mutator class. The difference here is that we use generics, and classes are not a concept in Rust; there are traits, which are quite similar, but not the same thing. Okay. All right. I think right now there are no further questions, and, yeah, that basically sums it up. If you don't have anything left to say, I think we're done. One more thing about plugging in mutators: we are thinking about making it possible to also write prototypes in other languages, for example Python, and then plug them in in a generic fashion. But if you then really want to do fuzzing, you should definitely use a low-level language instead, because the jump to Python is super slow. It is a long-term objective that we have to support Python, because, yes, it's great for prototyping and experimenting with a new fuzzer. Okay, that sounds very good. All right, then I'll say thank you a lot again. Thank you so much for hosting. Thank you. And thanks to all the folks on the tech team, on the rC3 team. You're awesome. Have a nice day and enjoy the rest of it. Thanks. Bye.
From the AFL++ team comes a talk about the core concepts of fuzzing, novel fuzzing research, a library, and parts of fuzzing that can be edited and swapped out. In this talk, we present the theory, building blocks and ideas behind our evolution of AFL++, a powerful and flexible new fuzzer design. Instead of a one-trick-pony command line tool, security researchers will be able to build the perfect fuzzer for their target, and extend parts of their fuzzer with their own code. After dealing with the monolithic C codebase inherited from AFL for over a year, we learned how to build a better toolsuite from scratch, as a library, with reusable components and easily maintainable code. The design of the framework follows a clear division of fuzz testing concepts into interconnected entities. Like LEGO bricks, each part of the fuzzer can be swapped out for other implementations with different behavior. The first prototype, libAFL, was developed in C as one of the AFL++ Google Summer of Code projects. After seeing that the concepts work in practice, we are now creating a powerful fuzzing framework in Rust. This talk discusses these concepts and how they relate to existing fuzzers at the state of the art. Thanks to its flexibility, the library can be used to reimplement a wide variety of fuzzers. We discuss how we tackle common problems like scaling between cores, and embedding the fuzzer directly into the target for maximum speed. The building blocks discussed in this talk will be the engine under the hood of a future AFL++ release, and, hopefully, your next custom-built fuzzer.
10.5446/52025 (DOI)
Welcome to rC3! Welcome everybody to the Franconian.net stage. I have the great pleasure to introduce you to Leonie, who has been active in higher education politics since 2015. Among other things, they have been a representative in the European Students' Union and an Executive Committee member of the Free Association of Student Unions, fzs for short. And I also may introduce you to Lasse, who is currently studying in Leipzig and who is the president of the Arqus Student Council and a member of the fzs. Their talk goes by the name Let's Get Digital: how the EU envisions the future of European education. This talk will introduce you to current trends in European higher education policy, with special attention to how digitisation is used to further the strategic goals of the EU Commission. I'm happy to give the floor to Leonie and Lasse. Yes, hello and welcome everybody. We are Leonie and Lasse, and we want to talk to you about European education policy in the following minutes. We want to tell you what the next decade has in its pockets for European education policy, and we have prepared a quite extensive agenda for you. We want to tell you first where this is all coming from, what the EU Commission is doing, and why you should care about it, because those changes have the potential to impact European education policy quite staunchly and also to impact each and every university we know and the way we study and research. One of the core things in that will be the European University Initiative, which we will tell you about in some detail. Afterwards, we will talk about mobility and the social dimension; Leonie will walk you through that. Then we will go into the details of virtual exchange and internationalisation at home, and also micro-credentials, which is one of the biggest buzzwords buzzing around European education policy right now. We will also tell you about the European Student Card Initiative, before we speculate about the European degree, which is talked about in some places, although it's quite uncertain what is meant by that as of now. Afterwards, we will conclude our talk. Thank you. So what is the European Commission doing right now? As you all know, the European Union has come under a lot of criticism recently, and the process of Europeanisation has come to a steady decline, or at least it has gotten slower in recent times. You know all the buzzwords: Brexit is happening, the British have been leaving for two years now and will also leave the common market on the 1st of January, in two days. But also the governing Fidesz party in Hungary or the PiS party in Poland are blocking further integration of Europe, and on the international stage, too, Europe has come under some scrutiny. And so the European Commission, the European Council and the other institutions have come to the conclusion that further steps to integrate Europe even more are needed. One of those could be, for example, the further integration of the markets, trying to roll out the euro to other countries which currently don't use the euro as their currency. But one of the core fields in which further integration shall be advanced will be education. All these principles have been laid out in a speech by Emmanuel Macron, which he gave on the 26th of September 2017.
His words were actually quite drastic, and I just want to highlight that he has been talking of a European civil war, but also of the other side of the Atlantic, and he quite concisely described the problem from his perspective. It was surely also a reaction to his victory over Marine Le Pen in France. So he said that this European civil war and the internal divisions of Europe and the European Union have been halting the integration, and that further steps are desperately needed to get over that. He proposed a very broad array of things to overcome these internal divisions and to further integrate Europe, and one of the core things he mentioned was culture, through which Europe could be integrated. It is all in all very Eurocentric what he said, but the speech he held at the Sorbonne, which accordingly was called the Sorbonne speech, had quite a big impact on the European Union and on what would happen in the following years, especially in the sector of higher education policy, because seemingly they diverge from the Bologna process and try to integrate things even further, although it's still happening quite behind the curtains; it's still not really reaching a big public, I want to say. So when we look at the timeline, I just picked the date of the Brexit referendum as one of the things which happened in 2016; you could also put in there the election of Donald Trump or other things, for example the rise of China, which all seem to have had a big impact on the thinking and the worldview of Emmanuel Macron and of the European Commission. So things have been picking up, and then, on the 26th of September, as I told you, Emmanuel Macron held the Sorbonne speech and mentioned for the first time the idea of creating European universities, which will be abbreviated in the rest of this talk as EUI, meaning European University Initiative. On the 14th of November, the European Commission published the communiqué "Strengthening European identity through education and culture", laid out quite broad visions, and showed that they had picked up Macron's ideas; they also mentioned for the first time the creation of the European Education Area, which Leonie will tell you about in a few minutes. On the 22nd of May 2018, the EU Council, that is, the education committee of the EU Council and the heads of state and government, Angela Merkel, Emmanuel Macron, and all the others who were in power in 2018, combined the idea of the European University Initiative and the European Education Area and decided that they want the European University Initiative as a flagship of the European Education Area. This developed quite rapidly into the first call for European Universities, which ended on the 28th of February 2019, when 17 European University alliances were selected under the European University Initiative. Another call ended this year, and in total 41 European University alliances have been selected. The German Ministry of Education had quite a big impact on the whole process, which, as you can see, is a very, very rapid one in comparison to usual European policy. They were staunchly in favour of creating a European Education Area, were very, very supportive of the European University Initiative, and also decided, and followed through with, giving money from the German government directly to the European University alliances.
Most European University alliances which have German universities in them receive money from the federal government as of now, and that money is handed through the DAAD, the German Academic Exchange Service. So the whole process was very, very much furthered by the German government. In old documents going back to the 1970s and earlier, there has often been the idea of creating European universities, mostly from the German side, and it was shot down quite a lot in the last century. Now the push comes from the French side, so that's quite a divergence from former times, with the hope of integrating Europe further by means of education. So now, Leonie will tell you about... Yes. First, I also want to encourage you: if you have any questions about the abbreviations or the terms that we use, feel free to ask, because we know that the whole thing is very cryptic for people who do not spend a lot of their time reading these texts and talking about these concepts. So feel free. There are a lot of abbreviations going around, and they are confusing. So, there is the European Education Area, or EEA. The quote that I have here is taken from the Gothenburg text from November 2017: establish a European Education Area "based on trust, mutual recognition, cooperation, exchange of best practices, mobility and growth". When this paper was published, a lot of people who are involved in European higher education policy were really surprised, because we have already had a European Higher Education Area, or EHEA, since 1999; it was established with the Bologna Declaration. And interestingly, the basis for the Bologna Declaration was the Sorbonne Declaration, which was made a year before. So more or less saying "we need a new Sorbonne process for Europeanisation"... many people have seen it as saying, well, the European Higher Education Area, which was also called the Bologna process, has in a way failed to create an area built on trust and mutual recognition. So what is the goal of the European Higher Education Area? It is a collaboration between 49 countries who want to build an area implementing a common set of commitments through structural reforms and shared tools. As you might know, the EU has 27 member states, and the European Higher Education Area has 49, so there is quite a difference between those two. In this talk, we will not go into the whole impact that the Bologna process had on higher education in Europe at large; we will mostly talk about the impact that the European Education Area has, and in what way it creates a Europe of two speeds: on the one hand, the European Education Area, where a lot of money is spent on the different initiatives and programs which we are going to talk about, and, on the other hand, the European Higher Education Area, which has been trying for more than 20 years now to build more or less the same thing. What the EU Commission would say is that the European Education Area is not just doing the same thing, because they also want to implement mutual recognition in the fields of vocational training and schools. So that is a huge difference: the European Higher Education Area is for universities and universities of applied sciences, while the aim of the European Education Area is to address all kinds of credentials and education that are happening in the EU.
But there is also a lot of worry that the other European countries which are not members of the EU might be left behind, and maybe Lasse can address some of these concerns when he talks more about the European University Initiative. The European University Initiative, as you have learned, is seen as the flagship of the European Education Area and is something new. As I have told you already, it consists of 41 alliances with up to 10 member universities each. So the idea is that universities in different places in Europe get together and create a European University alliance. In the introduction, for which I unfortunately forgot to say thank you, it was mentioned that I am president of the Arqus Student Council. I am sure none of you really knows what that means, unless there is a lucky coincidence. Arqus is one of those European University alliances. It consists of seven universities from seven countries. It has created an internal structure and also some form of student council, which we are still in the process of creating, but we try to represent student interests through this Student Council, of which I am president. In total, those numbers add up to 280 higher education institutions involved in the European University Initiative across the different alliances, and every single alliance gets 5 million euro from the Erasmus+ budget, which will probably continue in the next EU budget as far as I know, and each alliance also has the opportunity to use 2 million euro from the Horizon 2020 budget. Those 5 million euros are for a term of three years for the first call. There are universities from all member states, plus Iceland, Norway, Serbia, Turkey and the United Kingdom. I will tell you about the two-speed part in a second, when I get to my maps, and I will return to Emmanuel Macron's Sorbonne speech, where he outlined what he thought and where the European universities are first mentioned, because in this speech the goals of Emmanuel Macron for a higher education area are extremely ambitious. I believe he didn't have the intention of fulfilling these goals, because nobody can in the short term. Within five years from 2019, when the initiative started, so by 2024, there should exist 20 alliances or European universities. We don't know if there is a difference in this line between European universities and European University alliances, but the central goal highlighted is that all students of these European universities shall be mobile between those universities. In the first call, the goal set by the European Commission was that 50% of the students enrolled in an alliance shall participate in mobility between the universities. That means that in Arqus, where, again, I am president of the student council, we have around 320,000 students across those seven universities; that would mean that 160,000 students would participate in mobility during their study programs, which is extremely ambitious, considering that the usual numbers are somewhere between 10 and 20%. But let's look at how the universities are distributed throughout Europe, to see how the different speeds play out. We can see here a map of the universities participating in the European University Initiative, counted up by country. Germany, for example, has got 35 universities participating in the European University Initiative, and the smallest numbers are Malta and Luxembourg, both with one.
As you can see... well, you actually can't see Malta, but that's because it's so small, so I will just tell you what's happening in Malta. And Estonia has also got just one, and you can probably get what the numbers mean. What's most interesting here are Turkey, Serbia, and the United Kingdom. The United Kingdom has only seven universities participating in the European University Initiative, even though it has a similar number of inhabitants as France, for example; France, with 32 universities, is second in Europe, and Serbia and Turkey have also got some universities. But the two speeds look quite different, because the European Higher Education Area also includes Ukraine and Belarus and Israel and Russia, while the European Education Area is mostly just the core of Europe plus Iceland, Liechtenstein, and Norway, because Iceland, Liechtenstein, and Norway are participating in the initiative as if they were normal EU members. Brexit has had its impact, and Britain is just on the sidelines of this initiative: they are still part of the European Higher Education Area, i.e. the Bologna process, but apparently not of the Sorbonne process. We can also see the regional distribution of everything, and how many European University Initiative universities exist per student. The normal number is between 50,000 and 100,000 students per university, which is surprisingly good when we compare it with, for example, the Excellence Initiative in Germany, where the ratio has been way worse for the students; and considering that students shall be involved, it's quite good. But it also tells us something about the regional distribution, and the most interesting finding here is that Poland is way underrepresented: they have about 50,000 students per university more than the next closest one, which would be Greece. And the third worst represented country in the European University Initiative is actually Germany, even though they've got the most universities, which is because they've also got the most students. As you can see, the United Kingdom, Serbia and Turkey are not included in this graphic, because they would distort the relations very badly. On the lower end, the best ratios go to Malta and Luxembourg, which is basically just because they've got one university each and not that many students, and also to Iceland. And one can only speculate whether these distributions have political reasons. I'm more than certain that Iceland, Malta and Luxembourg are represented for more or less political reasons, but why Poland is so badly represented is just up for speculation. I think it could have something to do with the authoritarian government in Poland, but I'm not good enough at Polish-EU relations to make a better guess than that. Why Germany is in third place is probably because Germany has got a reputation, in the process of the European University Initiative, of being the most eager to be involved. From my experience working inside this area, Germany and the German universities in the initiative are pushing the hardest for further integration, and that is independent of whether it is the students, the researchers, or the leadership of the universities: on all these three levels, the people involved from Germany are, from my experience, pushing the hardest, which is in itself a big problem and can become even worse. And now I will hand over to Leonie again. Thank you, Lasse.
So I'm going to walk you through some concepts and projects which are important when we are talking about European education policy, because we have to understand that the European University alliances are more or less an experimental field for the different projects that the EU Commission is pursuing to further European integration. Each of these projects or concepts could fill a whole lecture, so I will be very quick about it, and if you have any questions, just post them through the channels. So what is mobility in the context of European education policy? According to the European Commission, the European ministers have agreed to double the proportion of students in higher education completing a study or training period abroad to 20% by 2020. Support for mobility remains a core focus of the Erasmus+ program. Mobility for students is defined as either spending a study period of 3 to 12 months abroad, which must be part of the student's study program, or having a traineeship or work placement abroad in an enterprise for 2 to 12 months, and this should, wherever possible, be integrated into the student's study program. And it might not be surprising to you that those ambitious mobility quotas are very hard to fulfil, not only because we have a pandemic at the moment, which more or less doesn't let people study abroad physically, but also because of the social dimension. So what is meant by the social dimension? More or less, it describes all the circumstances that an individual student faces, like disability or social and economic background, which might impact his or her or their ability to go abroad or to finish their studies. So if the European Higher Education Area wishes to address the social dimension, it has to create equal opportunities in higher education in terms of access, participation and successful completion of studies, studying and living conditions, guidance and counselling, financial support, and student participation in higher education governance. What this means for mobility is that there has to be portability of financial support, so if I'm getting disability benefits in Germany, then I can also take them to another country where I'm studying, and there has to be a removal of barriers and a provision of incentives. So if I want to have more mobile students, I also need to address the social dimension, because there are barriers which not every student can overcome, and one of the biggest barriers to going abroad is the financial situation of the students, but also disability or care responsibilities. And this brings me to, let's say, something that shouldn't be a solution to the problem, but is used in some cases as a solution to the problem that mobility quotas cannot be achieved. Virtual exchange and internationalisation at home are buzzwords that are thrown around about this, and I would not call them mobility. I could give a whole talk on both of these concepts, because the problem is that they are used by different people to further different agendas. But generally you can say that virtual exchange and internationalisation at home both describe international experiences which students can have without going abroad. This might be digital or not digital, actually, because internationalisation-at-home modules that I have seen have also, for example, had the requirement for students to work in an enterprise or organisation where the working language is English, to have their international experience.
And the important thing that I want you to take away from this: virtual exchange and internationalisation at home, this is not mobility, as it is defined and as student representatives are demanding it. And most people who know what they're talking about would also never suggest describing these offers as mobility. For example, the German Academic Exchange Service, the DAAD, is doing a lot of research and best-practice gathering on virtual exchange, and they always say this is not mobility. But there have been pushes by higher education institutions to soften the borders between mobility and virtual mobility in order to hit mobility quotas and to save money, because if you want to send more students from your institution abroad, it costs you money in counselling, and you will have to get the funding for them. And oftentimes, as you might know, higher education institutions don't have a lot of that. But the EUAs, the European University Alliances, are using the concept of virtual mobility to hit the mobility quotas, because, as you heard before, Macron wants all students who are part of a European University Alliance to have some sort of international experience, mobility. So it pushes the member institutions of these alliances to, in a way, try to redefine what mobility means: shorter periods that are not 3 to 12 months, for example, or just counting digital forms of exchange as mobility. So, another thing that the European University Alliances are a testing field for, and that is going to haunt us in the future, is micro-credentials. The definition that I have here is from the European Commission: "A micro-credential is a proof of the learning outcomes that a learner has acquired following a short, transparently assessed learning experience. They are awarded upon the completion of short standalone courses or modules done on site or online or in a blended format." So micro-credentials originated from massive open online courses, MOOCs. And for some reason, the European Commission has decided that micro-credentials make education more inclusive and accessible, that a larger uptake of micro-credentials could foster educational and economic innovation and contribute to a sustainable post-pandemic recovery. Micro-credentials can be provided by higher education and vocational education and training institutions, as well as by different types of private entities. So there is also the aspect of people and big enterprises earning money by offering those courses; you might know that there are micro-degrees and micro-credentials already offered by Amazon and Google. Micro-credentials, as shown in the other three bullet points, are really, really pushed by the European Commission right now. They are a flagship action of the European Skills Agenda, they are included in the September communication on achieving the European Education Area by 2025, and they are also included in the Digital Education Action Plan. And it is really emphasised that higher education should have a larger role in supporting lifelong learning and reaching out to a more diverse group of learners. And this sounds awesome, right? But the problem is that there is no evidence that micro-credentials boost innovation and inclusivity, and higher education institutions are set up as competitors to other providers of professional development offers, such as schools or Volkshochschulen, if you know them here in Germany.
Another problem is that the European Commission seems to have a very one-sided perspective on lifelong learning: they really focus on self-optimization and anticipating the needs of the labour market, on professional development. Another issue which is not addressed is who's going to pay for it, because, of course, developing these modules, these courses, is going to cost money for the higher education institutions. And I have been part of working on the recommendation of the German Rectors' Conference on micro-degrees and badges, and not everything that is written in there do I support. One thing that's written in there is that, to have cost neutrality for the higher education institutions, they might ask the students to pay a fee for micro-credentials. And last but not least, cui bono? Who's really profiting from micro-credentials? Is it the learners? Or is it the employers, who then don't have to pay for the professional development of their employees? And I think that it's really not the learners who are benefiting if more universities are doing this. But maybe this is something we can discuss in the end. So, another thing which is going to make our whole learning experience in the EU better, faster and stronger is the European Student Card Initiative. As it is written in the communique from Gothenburg, which we are mentioning again and again: we support this initiative on mutual recognition and will develop and launch a secure electronic system for the storage and retrieval of academic diplomas to facilitate verification of authenticity. The goal is to have full deployment of the European Student Card during 2021. Very excited to see that. The aim is that the European Student Card Initiative will enable every student to easily and safely identify and register themselves electronically at higher education institutions within Europe when moving abroad for studies, eliminating the need to complete on-site registration procedures and paperwork. The EU Commission has been implementing the Erasmus Without Paper Network to achieve this, and every institution that is part of Erasmus will be obliged to manage online learning agreements by 2021, to manage inter-institutional agreements by 2022, and by 2023 to exchange student nominations and acceptances and transcripts of records. Also, the participating institutions will need to promote the use of the Erasmus Plus mobile app. And by 2025, all students in Europe should be able to enjoy the benefits of the European Student Card Initiative. But where is the card? That's what I'm asking, because at the moment we only have an app, and supposedly in 2021 it will be there. But there is more. The European Student Card can do everything. It is supposed to give students the chance to access online courses and services provided at other higher education institutions. And over time, it shall allow students to enjoy cultural activities throughout Europe at discounted prices. It should also be linked to the EU's electronic identification rules to authenticate students, so it might be connected to the electronic identification that you are using in your passport, for example. I'll leave it to your imagination what kind of data protection issues might come up with that. But supposedly it will make everything great again. So Lasse, would you like to tell us something about your experiences talking about this? All right. So, okay, I will go on. So, to come to the most speculative part of our talk, I have to charge my laptop. Wait a moment. Technical break.
So we're taking you into the unknown of the European degree. One of the aims of the European Commission, which is talked about a bit more behind the scenes because it is a rather controversial point, is creating world-class European universities that can work seamlessly together across borders. There would be mutual recognition of higher education and the convergence of grades, matriculation numbers and study programs. As you can see, for something like the matriculation numbers, convergence might work via the European Student Card. And I think here we also see the big difference between the EHEA and the EEA, because the EHEA is really focused on leaving the different forms of organization and structures in the different countries intact. But if we want to converge all these things, we will need to have our structures become more similar in the future. So one thing that might be a benefit is that higher education institutions might become more autonomous from their national governments, there might be easier mobility of researchers and students, and student unions all over Europe might be more empowered by it. But there are also threats and problems, like the elimination of democracy in higher education institutions, because there are going to be complex multi-level governance structures: here in Germany, for example, there is legislation on the state level and on the national level, and there is also going to be EU legislation to consider. And Lasse, would you like to continue? Yeah, thank you. I'm sorry for the technical difficulties. So one problem which could threaten everything is the strong top-down approach towards everybody, which gets kind of obstructed by the complex multi-level governance structures. Because when we converge all these concepts like grades and study programs on a European level, it gets way more difficult for single student actors or researchers to influence the structure of everything. So when the study programs are converged throughout Europe, the Fachschaftsrat, the faculty student council, will have problems bringing forward its position, because it also has to consider six or seven other faculty student councils. Also, the same problem as with the Excellence Initiative in Germany can emerge, which would build a three-class system of higher education institutions: basically universities of applied sciences, universities, and European universities. Even though the universities of applied sciences and the standard universities are trying to position themselves as being equal, in many places the right to award PhDs to researchers still lies with the standard universities. And also, as you have seen, the risk is that there is no face-to-face teaching, or way less, and that has caused problems in other parts of the world, but it also has some benefits; maybe there are other places to discuss the benefits. Because the time is quite limited, we have to continue with our conclusion. Thank you. So, digitization is used as a tool to further European integration, for example by introducing the student eCard, virtual exchange possibilities, and micro-credentials, which are really tied to digital ways of transferring grades and having grades recognized. But the measures create and uphold a Europe of two speeds.
The digital solutions are seen as a cheaper way to hit mobility quotas and social dimension goals instead of addressing those problems in a real way. Also, the impact of the tools and structures which are being put in place is not reflected upon sufficiently, because looking at each of the initiatives and projects themselves, I also have a lot of reservations about them which are not addressed. And without significant engagement by those affected by these policies, there is a real danger of their being excluded from decision-making processes. So it is very important to ensure democratic student involvement and the possibility of bottom-up approaches. And I would also really suggest to the EU Commission to think about involving some privacy specialists before they do things, because I'm really worried. And now, five minutes over time, we would like to conclude our talk, thank you all kindly for your attention, and we are open for your questions now. So, thank you so much for this very in-depth talk. It was super, super informative and very, very interesting. And for now, unfortunately, there are no questions yet, but I think they will come up soon when we process all this input. Yes, I'm also sorry that we were so fast on some things, but it's really hard to know what not to mention, in a way. That's super, super okay. I would like to invite both of you, if you can spare the time, to our IRC chat, if there are questions coming up later. Or do you want to mention any further information that you can share with us? I could tell you about how Corona impacted everything, because there has been a quite big impact by now, which I think dropped out of the talk about the European University initiatives. Oh, but we all know that the timeframe Macron laid out has been completely blown away by the pandemic. The first European University initiatives started to work in November of 2019, and most European University Alliances haven't had the opportunity to meet before the pandemic started. The one I'm in, for example, planned on having a meeting on the 11th of March, and by then the Italian delegation from Padua was on complete lockdown and couldn't leave Italy. So everything was blocked. And so the big tip from me is: if you want to do some mobility and you know that your university is part of the European University Alliances, it's now the perfect time to apply for funds, because there is a lot of money lying around unused and you can easily go to one of the places, because everybody wants you to be mobile. So there can be some special deals, and you just need to ask around in your local institutions where you can apply for these grants. And also, the whole timeframe has kind of gone away and we will see how that turns out in the future, but the goals won't be achieved by 2024. Well, time will tell, I guess. Well, I mean, they also wanted to be finished with the Bologna process in 2009. And well, still going. I see, I see. So if there are no questions, I can also talk a bit more about something that worries me about the European Student Card. Yes, go ahead, please. Yeah, I'm just going to go back. Because if we are thinking about the plans for this and think them through to the end, of course, it would mean that there would be a European student ID, and it would make it possible to track students across borders, in a way, with all the information connected. And it's really hard to know what kind of data protection is in place for that.
So if it's also connected to electronic passports, there is a lot of stuff that is a bit worrying about it. But what is more worrying is something that Lasse and I talked about: we kind of feel that the European Commission might not know how universities, how higher education institutions, actually work. And maybe, Lasse, you might want to add something about the communication issues you are facing inside of the European University Alliances. Yes, thank you, exactly. So the problem is that not only does the European Commission not know how higher education institutions work right now, but apparently nobody knows how other higher education institutions work. It goes so far that when we in our student council talk about our structures, we don't have the words to speak together, and that's not because we are bad at talking English, but because the structures are quite different and all these linguistic connotations fade away. For example, we are the Leipzig student union, in German the StudentInnenRat, so a student council, but it was built on a kind of post-socialist view after the fall of the GDR, because it was founded on the 9th of November of 1989. So the connotation that it has something to do with some form of council democracy gets completely lost. But we also don't know how the structures in Spain exactly work, because it's very hard to translate Fachschaftsrat into Spanish. And I think that says a lot, and we even have problems talking to our Austrian colleagues in German about our institutions, because even there they are difficult and different. And so we need new words, and we need multilingual lexicons, and we need to build a common understanding about what we can do further. And even by creating this common language, we are deeply converging everything, and the structures get similar and the views get similar. And the student councils influence each other in ways we hadn't thought possible, probably. So that's very, very interesting, and it will become difficult. What it means for the Student Card Initiative: take the semester ticket for public transportation in Germany, which is quite common, where you pay a fee of 180 euros or something in that bracket and can use the public transportation in your city or your region. It's difficult to explain that to the European Commission, because Germany is kind of the only place where that exists, and then they want to have one common culture offer for all students in Europe. But then what happens to the University of Münster, where they have a culture ticket, where you can use your student card to visit the museums and the libraries and the theaters for free, just because you've got your student card? How would you include that in a student card which will be distributed all over Europe? Because then the local institutions, like the theaters and the museums, will say: no way are we going to make it free for every student in Europe if we are the only place doing so. And so the Commission is probably getting into quite deep problems with some of the initiatives, because some of their implications go deep into the fabric of what has been created around higher education institutions throughout the countries, which is also different from country to country. Very true. So different that we can't even really speak about it. So yeah, language barriers and further difficulties. But I think they will all get solved, I hope.
Well, what do you think, Nina, would you use a European student card, do you think it would be useful to you? I think it would be useful. I mean, I'm actually studying English, so for me going abroad would of course be great, and I think, I mean, yeah, of course I would use it, maybe more in the beginning than later on. But as he said, like museums, et cetera. And I think it would be so useful for everybody, actually, and they would really not just use it, but... I'm lacking the words, sorry. No problem. It would be so convenient for everybody. I'm just seeing a question coming in: are there any plans to educate students in addition to the subject content, or to encourage them to study more than just their subject? What would you think, Lasse, is there some intention of broadening the curricula? I actually think it's kind of one of the few possible benefits of the micro-credentials. I mean, it will lead to the creation of weird subjects, and I don't... I mean, it's a very, very difficult question, because it's so broad and we don't know how it will develop. Take the sociology education, which is quite strict in what they teach and so on; the participation of the University of Leipzig in the European University Alliance will broaden the sociology subjects at hand for me. Or not for me, because I'm nearly finished with my studies. But you can use the sociology departments of the partner universities without many hurdles, because permeability between the study programs is kind of the official policy right now. I see. We will see how that turns out. Some bonding will happen, but I don't see that the study programs get truly liberated, so that you start at the university with a blank study program and can just pick what you want. I see, yeah. So we're running a little bit out of time right now. I'm really sorry to cut you off. I would really invite you again to the chat, so any further questions might go there directly to you. So thank you again for this very, very informative talk. I will rewatch it. Yes, and I mean, people can also reach out to us however they want. I'm on Twitter, I'm lemon green bird, if you want to find me, and you can just ask me questions about micro-credentials, for example, which could fill a whole talk of me talking about it. And I think also Lasse will be very happy to answer any questions you have about the European University Alliances. And we are always looking for more people who are interested, because we think it's important that people with a lot of different perspectives look at this issue, and not only those few people who are experts on it. So thank you again. Have fun on the last day and stay healthy. And we'll see you around. Goodbye. Welcome to the RC3.
The EU Commission is using education politics and digitization to increase European Integration and move beyond ERASMUS+. Current trends in European Higher Education Policy include the European University Initiative, Micro-Credentials, Virtual Exchange, Internationalization at Home and the Student eCard. We want to show you how these ideas and concepts are interconnected and discuss the pros and cons of the current developments, as they can lead to more democracy and more cooperation as well as to more isolation. As the common notion of an "ever closer union" has failed with Brexit, new ways are considered to increase Europeanization. Although the idea is as old as the Union itself, a new concept has taken center stage, promising to create greater cohesion within the union: the European University. In November 2017, the European Commission went public with plans to start a so-called "Sorbonne Process" in order to create an EU-wide common educational area (EEA) in which mutual recognition of qualifications, mobility and improvement of language acquisition are tackled in unison. These efforts include the introduction of European University Alliances (EUAs) and a Student eCard. The latter aims to improve the exchange of bureaucratic information between universities in the EU. Other concepts like Micro-Credentials, Virtual Exchange and Internationalization at Home have later been added to these new efforts by the Commission. The EUAs have become important testing fields and launch pads for these ideas, and student representatives are increasingly worried about their implications for the future of education. But while these new instruments can fundamentally change the nature of higher education, there are also reasons for optimism. From the beginning, some of the EUAs have included democratic student involvement in all their structures. We hope that these early efforts can lead to new forms of student representation, teaching young students democratic cooperation on a European level and helping them represent their interests even better. This talk will introduce you to current trends in European Higher Education Policy with special attention to how digitization is used to further strategic goals of the EU Commission. The implementation of the European Universities will not be without conflict and conflicting positions: Micro-Credentials, European Student eCards and new democratic structures can lead into a benevolent as well as a malign future. Our talk aims to cover all relevant dimensions and offer you the opportunity to discuss the problems at hand with us.
10.5446/52032 (DOI)
Alright, and a lovely welcome back to the Huck stage this evening. Here to present a talk about models in science, their opportunities, mechanisms and limitations, is Markus Völter. He will be talking about scientific models: what they do, how they work, what they are. They are quite in the news right now because of the coronavirus and climate change, and with models you can do a lot, but for the public it's not always quite apparent how they work and what they do. So: we have our own streaming site with an IRC and no JavaScript. If you don't like that, go to live.hack.media. There you can also ask questions in the IRC or via Twitter with the hashtag R3Huck. And apart from that, I think we're good to go. So I'm handing over to Markus. Alright, so, the talk. It's scheduled for 90 minutes. I might take a little bit longer, but it should be interesting. Alright, so I want to talk about models. I'll cover topics like the difference between analytical, numerical and parametric models, forecasting versus explanation, abstraction and simplification, statistics, forecasting the past (sounds stupid, but it's useful), sensitivity analysis, optimization, forecasting versus scenarios, fitting of data, fake numerical precision, chaos, emergence and unknown unknowns. That's quite a lot of stuff, so I'll run rather fast, as you'll notice. The examples will be taken from cranes, pendulums, weather and climate, flight simulators, sheep and ants, gravitational waves, fusion, the LHC, the Event Horizon Telescope, cellular automata and more. And you might ask yourself why the hell this guy is talking about this. Well, I have some background: I'm a physics engineer and a computer scientist, but more importantly, over the last 12 years I ran a science podcast called Omega Tau and interviewed scientists and engineers for about 600 hours. And that's basically where all of the material of this talk comes from. The podcast also gave me an hour in the backseat of an F-16, which of course was the highlight. Alright, so let's get started. You have probably all seen these large steel monsters. And if you look closely, there's this little traffic-light kind of thing on the cabin. It signals to outside personnel the load of the crane: not the weight that is on the hook, but the actual stress on the crane, the load on the crane. And the colors mean what you'd expect: okay, be careful, and at the limit. So what is the load of the crane? Well, one part is the bending of the boom, right? The boom is a long stick that is of course bent by the weight that's at its tip on the hook. And the other part is the stability, the balance of the lifted mass and the counter mass, I should say counterweight. So how does the crane know all of these things? Well, there's a model that calculates this and drives the traffic lights. Let's investigate what goes into that model. Well, first, of course, there's the weight that is currently being lifted: the more weight, the higher the bending moment on the boom. The length of the boom is relevant: these booms are telescopic, and if they're longer, you have, for the same lifted mass, more bending moment. The angle of the boom, obviously: you can lower it or make the angle steeper, and the lower the angle of the boom, again, the higher the bending moment, because the radius is larger. There are additional wires and poles you can use to stiffen up the whole construction.
And if you have those installed, then your bending moment is less for a given lifted weight. And finally, wind: if the wind moves your lifted mass sideways, this leads to sideways bending of the boom, which lowers the overall stability. So that's for the bending. For the balance, obviously, the relationship between the lifted weight and the counterweight and their respective radii is relevant, so that is something that's taken into account, as well as the distance, the radius, of the stanchions: the further out you have moved them, the bigger is the, if you will, footprint of the crane, and the more stable it stands. So these two aspects of overall load are calculated separately, and whenever one of them reaches a critical level, that drives the traffic light. So how is this done? Well, for the stability, for the balance, we simply use the lever principle, right? You basically calculate the two moments: the actual lifted weight is measured from the pressure in the hydraulic cylinders, the counterweight is configured, and then you know the two radii, which you can calculate from the angle and the length of the boom. And then you can simply figure out whether the crane is in balance. For the bending, it's a little bit more interesting, because here what you basically do is run an FEM analysis, a finite element analysis, where you simulate every little discrete piece of boom material and run a numerical simulation, a numerical model, that figures out where the load is high and when it reaches a limit in any of these small finite volumes. And of course, you can't run this in real time in the crane, which is why the results of this analysis are abstracted into basically lookup tables, right? This is called a parametric model. We'll get back to these model categories later, but that's basically how it works. And so there is an interesting thing that happened this year, or maybe it was last year. This is a 750-ton crane, the LTM 1750-9.1, manufactured by Liebherr. And again, it can lift 750 tons. Well, it was able to do 750 until earlier this year, because now it can do 800, only because of a software update; nothing changed in the actual construction of the machine. In fact, existing cranes can be updated retrospectively. So how does this work? Well, here's something translated from the Liebherr website. It says: we completely recalculated the crane; more and more precise FEM models on faster computing hardware permit a less conservative approach; the latest regulations regarding the load-bearing capacity in compliance with applicable standards, blah, blah, blah. So basically, they have newer modeling tools, both in terms of algorithms and software and in terms of computers, and now they can be a little bit less pessimistic, less conservative, about what the crane can lift. And so the software update basically installed new lookup tables, and now the same machine can lift 50 tons more. Regarding the balance, I talked to a bunch of crane operators this year, obviously, and one told me: you know, today it's basically impossible to knock over a crane, to bring it out of balance so that it actually falls over. Oops: this is from 2013, when one of these large Liebherr tracked cranes fell over in Brazil during the construction of their Olympic stadium. And the problem there was surface stability, right? The ground gave in. These cranes produce a significant pressure on the ground.
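To make the lever-principle balance check concrete, here is a minimal sketch in Python. The function name, the numbers and the safety factor are all invented for illustration; a real crane controller works from measured hydraulic pressures and certified load charts, not from toy arithmetic like this.

```python
def crane_balance_ok(lifted_mass_kg, load_radius_m,
                     counterweight_kg, counter_radius_m,
                     safety_factor=1.25):
    """Lever principle: the crane counts as balanced while the moment of
    the counterweight exceeds the moment of the lifted load by a margin."""
    load_moment = lifted_mass_kg * load_radius_m          # tipping moment
    counter_moment = counterweight_kg * counter_radius_m  # restoring moment
    return counter_moment >= safety_factor * load_moment

# Toy example: 50 t at 20 m radius against 150 t of counterweight at 8 m.
print(crane_balance_ok(50_000, 20.0, 150_000, 8.0))  # False -> red light
```

Note what this sketch cannot tell you, which is exactly the talk's point: nothing about the ground under the crane ever enters the calculation.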
And if the ground isn't suitably prepared, then the whole thing falls over. And obviously, the model that, as we have seen, considers the levers of the weight and the counterweight can't know about this. So this is a problem. Here's another example of something that went wrong: a large shipboard crane in Rostock, Germany, completely crashed. And so again, is this a modeling problem? Well, no, not really. It was very likely a material fault, a production mistake, in the hook itself. The hook broke, and then the whole boom basically snapped over backwards, completely destroying the crane. Millions were lost and several people were hurt. Big, big drama. So what can we learn from this? We have to be really careful about what goes into a model. That sounds trivial, but the model doesn't make any statement about whatever hasn't been put into it, like, for example, the status of the ground or the material quality of the hook. And these assumptions and constraints must be well defined, and they must be communicated to the users. So, is this a useful model? It's a Lego model, actually, of one of these LTM 1750 cranes. It's not useful for understanding the load capacity, because obviously the Lego has no relationship to the real thing in terms of stability. But you can explain in principle how the crane works, you know, with extensions and telescopic booms and stuff. So it can serve as a model for illustrating that. It might also serve as a model for worksite planning: if you have the worksite in the same scale, then why not, right? So this model also has its purposes, but it's not the same purpose as the mathematical models that we've talked about before. So again, the user of a model has got to know what's in the model and what its purpose is, and the modeler has to decide what is important, what impacts the results. So let's define the notion of model. It's an abstract representation of something that exists in real life, in the real world, expressed in a suitable language like chemistry, physics, math or Lego (should be an "oh", not an "ah"), for the purpose of understanding, prediction, optimization, proof or production of that real-world thing. The proof and production parts we'll ignore in this talk; we're going to look at understanding, prediction and optimization. So when you build a model, you have to think about what part of the reality you want to represent in your model, what is relevant to represent that part faithfully, and how to represent and compute it, right? I mean, you have to somehow make it, quote, runnable, mathematically calculable or in other ways executable. You have to think about how you observe the parts of the reality that serve as input to the model, and then what the relevance and trust of the output is. Again, if you don't input something about the ground, or if you can only measure it unreliably, then the relevance of the output and the trust level of the output are low. We'll get back to that. So let's revisit analytical, numerical and parametric models. Here's a very simple analytical model. It's a differential equation of the movement of a pendulum, right? It basically says that the angular acceleration, which is what the left term says, depends on the sine of the angle itself, multiplied by the ratio of the earth's gravitational acceleration and the length of the pendulum.
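Spelled out, the equation on the slide is presumably the standard undamped pendulum equation:

$$\ddot{\varphi} \;=\; -\,\frac{g}{L}\,\sin\varphi$$

where $\varphi$ is the deflection angle, $L$ the pendulum length and $g$ the gravitational acceleration: the angular acceleration on the left is related to the sine of the angle itself on the right, just as he describes.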
And so an analytical model is basically one where you use precise or approximate equations; this one, as we'll see, is approximate. And for dynamical systems, these are often differential equations, which relate a quantity to its first or second or whatever derivative. Again, here it relates the angle phi to the second derivative of that same angle. So an analytical model is what we all know from school as formulas, as equations. Now, a numerical model is more interesting, because in practice it plays a more relevant role for what we hear about models today. What you do is discretize your world. In the finite element analysis we saw in the context of cranes before, you basically discretized the overall boom into small finite volumes of steel. In climate and weather models, you discretize the atmosphere into volumes of air. If you do epidemiology, you have subsets of the population. And then you reassemble the overall system by basically joining all these little volumes and computing a whole bunch of properties for each of them; I call these properties E, for Eigenschaft, the German word for property. And then, from the calculated properties of each of these small volumes, you reassemble the overall story. And then, typically, if this is a system that evolves over time, you iterate over time. So again, for the weather, you have small volumes of air, you calculate various properties for each of them, and then you run this thing forward over time. There are a bunch of constraints. One is that you have to make sure that the changes are plausible both in time and in space. What I mean by that is: if you have a weather forecast and it says that at time t equals one the temperature at some location is 20 degrees, and 10 minutes later it's 10 degrees, there's probably something wrong. The same goes for spatial plausibility: if you have 10 degrees in Stuttgart and 20 degrees in Esslingen, a city nearby, that's probably strange. So you have to make sure that either, by design, the model doesn't jump, both in time and space or whatever other discretization dimension you've used, or, if jumps do show up as an outcome, you treat the model as wrong. So basically, you still calculate with equations in numerical models, but these are discretized, often spatially and temporally. Those are the most obvious discretizations, but of course others might be used as well. If you use smaller boxes, smaller discrete volumes, you get higher resolution; same if you use smaller time steps. This gives you increased precision. In some cases, it gives you correctness. For example, in control theory, if you use too large time steps, you might not get a control algorithm that is stable, because with its large time steps it can't see oscillations and control them. So the algorithm isn't just imprecise, it doesn't work at all if you don't use a fine-grained, high-resolution time. The same thing holds, in some cases, for space. If you want to forecast thermals, the stuff that propels gliders up, warm air rising above mountain peaks, and you have a coarse-grained resolution of your terrain, then these peaks are averaged out and you will not be able to produce correct predictions of how and when thermals develop above those peaks. This is something glider pilots know very well when they fly in mountainous areas: the forecasts there are often wrong. And then, of course, the drawback of high resolution is that you need more computational power, and even today, with faster and faster computers, it's still quite expensive.
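To make the discretize-and-iterate idea concrete, here is a minimal numerical version of the pendulum from above, a sketch rather than anything a real simulation package does. Each step depends on the previous one, and the step size dt is the resolution knob: make it too coarse and the result degrades, just as described for control loops and terrain.

```python
import math

def simulate_pendulum(phi0, L=1.0, g=9.81, dt=0.001, t_end=10.0):
    """Semi-implicit Euler integration of phi'' = -(g/L) * sin(phi).
    Returns the angle at every discrete time step."""
    phi, omega = phi0, 0.0
    angles = [phi]
    for _ in range(int(t_end / dt)):
        omega += -(g / L) * math.sin(phi) * dt  # update angular velocity
        phi += omega * dt                       # update angle from velocity
        angles.append(phi)
    return angles

angles = simulate_pendulum(phi0=0.3)
print(f"angle after 10 s: {angles[-1]:.4f} rad")
```

Because the state at each step is built from the step before, there is no way to ask this model directly for the state at some far-future time; you have to march through every step in between.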
Scientists still have to make lots of compromises in various ways, for example in resolution, to make it feasible to run on existing machines. And then, of course, you can compose these models. For example, the ECMWF, the European Centre for Medium-Range Weather Forecasts, runs a worldwide weather model. They use 9 by 9 kilometers horizontal resolution and 500 meters vertical resolution, up to 80 kilometers altitude, by the way. And then the German weather service, the DWD, has a higher-resolution model, but only for Germany. And basically, they use the more coarse-grained model from the ECMWF as a boundary condition for the finer-grained model, and also as a means of initializing its data. And by the way, weather models calculate around 50 of these properties, so lots of stuff going on there. And so you can see easily why even large computers can be utilized completely with this approach. So let's look at parametric models. Here we look at cloud formation; you can see a beautiful picture of clouds here. And the question is: what kind of cloud forms, how much of the sky is covered by clouds, and what are their lower and upper limits, called base and ceiling? And there are a lot of parameters that go in, right? The position of the sun, basically the time of day; how much high cloud you have that limits the energy imparted by the sun; then of course humidity, pressure, temperature, properties of the ground, the elevation of the ground. And if you want to compute this algorithmically, there are lots of chemical and physical processes going on. Some of them are theoretically not even known. Lots of small things happening, lots of complex interactions. So basically, what you do in principle, again, is discretize, and you associate all these interesting properties with each of these discretized air volumes. But you can't really run this as a numerical model, because again, some things aren't known, interactions are complex, and it would take too long. So what do you do? You basically put all these things into a series of lookup tables, right? So for example, you know properties E1 and E2 from, let's say, observations. From those, you calculate some property X. And then this property X, together with properties E3 and E4, you put into a simple equation, which gives you a property Y, which you then use in another lookup table together with property E5, and you get the result. So this is basically a very simple set of lookup tables and/or equations. And of course, those are, as we've seen with the crane, filled by numerical models, analytical models and experiments. Experiments: even if you don't know how something works, you can easily just take the experimental results and package them into a parametric model, which you then run to make a forecast. Another example of a very simple parametric model comes from medicine. This is actually the only software example, I think, in my whole talk. I built an application in the healthcare domain not too long ago where, for example, from the systolic and diastolic blood pressure, depending on the ranges they are in, you calculate a risk factor, which you then use, for example together with the age and the weight of the patient, to make further decisions regarding treatment. Now, combining these different kinds of models: this is a picture of an aircraft which many of you probably know from the movie Top Gun. Although this is not actually a picture of a real aircraft; this is a screenshot from DCS.
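Before following him into the Top Gun example: here is what the blood-pressure parametric model he just described might look like as code, a pure lookup-table model. The thresholds, the scoring and the function names are invented for illustration; real clinical cutoffs come from medical guidelines, not from this sketch.

```python
def bp_risk_factor(systolic, diastolic):
    """Toy parametric model: map blood-pressure ranges to a coarse
    risk factor via a lookup, not via any physiological equation."""
    if systolic >= 180 or diastolic >= 110:
        return 3  # high risk
    if systolic >= 140 or diastolic >= 90:
        return 2  # elevated
    return 1      # normal

def treatment_priority(age, weight_kg, risk):
    """Second lookup stage, combining the risk factor with patient data."""
    score = risk * 10 + (age // 10) + (1 if weight_kg > 100 else 0)
    return "refer to specialist" if score > 35 else "routine monitoring"

print(treatment_priority(age=62, weight_kg=85, risk=bp_risk_factor(150, 95)))
```

The point carries over: nothing in these tables explains why the thresholds sit where they do; they just encode results obtained elsewhere.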
Here's another one; it looks beautifully realistic. So that would be another example of models: how do you model these surfaces and how do they reflect light. But we're going to talk about something else, and that is the modeling of the aeromechanical properties. So let's say we are Maverick, and, you know, we're flying around in Top Gun with Goose, and we decide to turn right. So we smash the stick to the right. What happens next, right? How does the simulated aircraft react when the pilot moves the stick to the right? So we have the stick movement, which is translated simply: stick angle becomes a percentage of deflection. And that's the input to the hydromechanical flight control system. And now you have to think about to what degree the aero surfaces, in this case ailerons and spoilers on the wing, are deflected. And interestingly, they actually calculate that in the simulator. And they calculate it based on the current hydraulic pressure in the simulated hydraulic system, because if a MiG-28 shot away half of it (you know, remember Top Gun), then you have less pressure in your hydraulic system and the deflections are slower. Then you find out what your flap settings are, and what the wing sweep angle is. As you remember from Top Gun, the F-14 has sweepable wings, right, to adapt the wing to different speeds. Then: what is the coefficient of lift for that deflection, for this flap setting and this wing sweep? And this is a parametric model, which happily NASA produced in the 70s: they put one of these aircraft into the wind tunnel and measured all of these things. And those guys somehow got hold of this wind tunnel data. And now that we have the coefficient of lift, we can calculate the actual lift force. And we do that by multiplying the coefficient with a bunch of properties of the aircraft, but also, specifically, with the speed of the aircraft, which they get from the simulation environment. So there's another calculation. And then they look again at the wing sweep angle, which gives them an inertia. And they know how much fuel is in the wing in the simulator, because they know how much fuel the engine has burned since takeoff. And they take this information to calculate the inertial moment that acts against the movement caused by the differential lift of the two sides. And this then gives you a roll, a roll movement. So the point is, I'm trying to illustrate here how this overall model uses analytical models, meaning calculations, and parametric models jointly in one, if you will, big calculation, relying on previously calculated numerical data, in this case stuff that NASA figured out in the wind tunnel. Now, models aren't just relevant for controlling a simulated airplane on a PC. There's also a model very prominently used in the flight control system of an Airbus A320 and, of course, all subsequent Airbus airplanes. So the way you actually fly an Airbus is that you move the stick. And what this does, literally, is that the stick flies an idealized airplane, a model of an idealized airplane in the computer. Then the various attitude sensors and air data computers measure the state of the physical aircraft, things like angle of attack, roll angle, stuff like that. And then there is obviously a difference between the model that the pilot flies, the idealized airplane, and the real one.
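The lift computation he sketches is presumably the standard lift equation: once the coefficient of lift $C_L$ has been looked up from the wind-tunnel tables for the current surface deflection, flap setting and wing sweep, the force follows from

$$L \;=\; C_L \cdot \tfrac{1}{2}\,\rho\, v^{2} \cdot S$$

with $\rho$ the air density, $v$ the airspeed taken from the simulation environment, and $S$ the wing reference area.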
And then you have control algorithms that decide how best to get the state of the real aircraft in sync with what the pilot wants the aircraft to fly like, based on the pilot's input into the idealized aircraft model. And it's interesting, because if, for example, one of the flight computers has failed and you cannot control your ailerons, then you still use the same flight inputs, but the system figures out how to use, for example, the spoilers to produce the roll rate, in order to bring the state of the real aircraft in line with the model. So, to wrap up that part of the talk: we have analytical, numerical and parametric models. Analytical models are basically functions, if you will, equations that take a bunch of inputs, and you can calculate the output you're interested in directly, for every point in time, for every combination of inputs. The quantities you calculate with are usually continuous, and the equations are physics or whatever else, chemistry, biology, but it's the real science. In a numerical model, you discretize, basically, the analytical world, and then you iterate. And typically the iteration will depend on the previous state, for example on the last time step. And this means that you can't just say: hey, model, give me the state at t equals 5000 seconds, because in order to know that, the model has to calculate all 5000 previous time steps. And that makes these kinds of numerical models much less flexible. On the other hand, they're much simpler, because solving complex systems of equations numerically is much easier than analytically. The equations are still basically the real science, but the quantities are discrete. In a parametric model, you again have a direct lookup, which is not expensive, but the quantities are usually much more coarse-grained, and the equations do not necessarily resemble the real science. They might be completely opaque, right? They might be just numbers you figured out somehow, and you know they kind of work and you use them, but they don't tell you anything about how the real world works. They make a prediction, but they don't explain. So let's elaborate on that difference. Let's say you have the real world changing somehow, as shown in this lower dark gray box; the reality changes, basically some wobbly line. This wobbly change is sensed by your model as a step function; the model calculates and outputs some kind of reaction. Okay, that's fine. That is a prediction. So the model expresses, or gives you information about, how reality will change as a consequence of whatever impulse you are interested in. How does the weather change if temperature increases? How does the virus spread rate change if 60% of people wear FFP2 masks, right? Stuff like that. But there's no explanation going on, which raises the question: what actually is an explanation? And I struggled with this quite a bit, because what is the difference between the how and the why? Isn't the why also kind of a how? How do you make the difference? And I interviewed somebody from CERN, and he gave me a very, very interesting definition of what "why" means in this context. He says that a model explains if it produces its prediction through applying a more general theory to this particular case, right?
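Looping back for a moment to the Airbus model-following scheme just described, before he unpacks that CERN definition: here is a heavily simplified sketch of the idea. The stick drives an idealized model, sensors report the real state, and a controller works on the difference. Real fly-by-wire control laws are of course far more sophisticated; all numbers and gains here are invented.

```python
def fly_by_wire_step(stick_input, real_roll_rate, gain=2.0):
    """One control cycle: the stick flies the idealized airplane,
    the controller chases the difference with the real aircraft."""
    commanded_roll_rate = stick_input * 30.0  # idealized model: deg/s per unit stick
    error = commanded_roll_rate - real_roll_rate
    surface_deflection = gain * error         # toy P-controller output, in degrees
    return max(-25.0, min(25.0, surface_deflection))  # actuator limits

print(fly_by_wire_step(stick_input=0.5, real_roll_rate=5.0))  # -> 20.0 degrees
```

Note that nothing in this loop cares which surface produces the roll moment, which is why the same structure keeps working when the system has to substitute spoilers for ailerons.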
If you don't have, if you will, a theory specifically for this problem, but you have a more general theory, and the model, if you will, configures that theory in order to produce a correct prediction, then if you look at the configuration of that model, you have the why, you have the explanation. Examples of such general theories are, of course, Newton's laws, or quantum chromodynamics, or Darwin's evolution, or some kind of reaction kinetics in chemistry, right? And so this really is a better model, because it doesn't just predict, it also explains why it makes that prediction. It explains the underlying mechanisms, which is, again, more useful. Now here is an example of a category of models that don't do that. You probably all know what this resembles: this is a neural network, which has a bunch of inputs, a whole bunch of weights, and an output. So the neural network makes a prediction, right? It's the ultimate form of a parametric model, because each of the weights in that neural network, and there are millions in real networks, is a parameter. And it is not at all transparent what each of these millions of parameters means and what the model has learned. You can test, and that's what people do. But if you forget to test certain things, you have no idea how the model will behave. And there is this funny example where people were trying to build a machine learning model, a neural network, to recognize sheep. The idea was to make the model learn the shape of sheep. But of course, what that particular model learned was the green of the grass on which sheep are usually photographed. And so when you showed the thing just green pastures, it also detected sheep. And so basically, such an opaque, purely parametric model will produce wrong results with some probability for new situations. And the problem is that you don't necessarily know what a new situation is, because you have no clue what the thing has learned. Obviously you test. But testing, as we all know from programming (well, not sure if we all know, but as programmers know), can only prove the presence of bugs, never the absence of bugs. And you have that problem here. So this really doesn't matter for advertising. It's annoying if they try to sell me the same shoes that I just bought, or try to sell me, I don't know, orange hats, which I will never buy. But it's not really a problem. It is more of a problem if people aren't given credit because some stupid machine learning algorithm has basically incorporated some discriminatory decision based on stupid training data. And it's really a problem, potentially fatal, in autonomous driving. So there's a whole field about trustworthy machine learning and understanding what a model learned, but that's in its early stages. So let's look at abstraction and simplification, again back to our simple pendulum. And we can ask: how quickly will this thing pendule back and forth? And yes, I know, "pendule" is not an English word, but I thought it's funny. So what's the period of this pendulum? And if you look at the Wikipedia article for this, you get a very long formula, and you can see the three dots, so it continues. I actually did a screenshot of Wikipedia because I was too lazy to type this up in a formula editor myself, because it's so long. The point is that you can see that this period depends on the angle.
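For reference, the "very long formula" with the three dots is presumably the power-series form of the exact period:

$$T \;=\; 2\pi\sqrt{\frac{L}{g}}\left(1 + \frac{1}{16}\,\varphi_0^{2} + \frac{11}{3072}\,\varphi_0^{4} + \cdots\right)$$

where $\varphi_0$ is the amplitude of the swing. For small amplitudes all the correction terms vanish, leaving only the leading factor, which is exactly the school formula he turns to next.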
So if you move the pendulum further out, it oscillates at a different rate than if you move it out only a little bit. And you can make a simplification: for very small angles, where the sine of the angle is basically the angle itself, sin x ≈ x, you can simplify so that the period is two pi times the square root of L over g. And that is probably the formula you learned in school, which is completely fine: it gives you the right result if you stay within the range for which the simplification is valid, right? And it gives you wrong results outside of that validity range. So this is another one of these things: if a model has been built for a certain limited parameter range, if you will, a range of inputs, and you use it outside of that, the forecast is wrong. The obvious example here is Newton's mechanics, because Newton's mechanics are wrong, right? Einstein showed that things are quite a bit more complicated with special and general relativity. But we also know that for our everyday world, because we are at low speeds and relatively low gravitational forces, Newton is good enough. But you have to be aware of this: if you try to calculate the trajectories of spaceships with Newton alone, you will not get the results that you expect; the spaceship won't end up at the required position. So, another question: will this thing oscillate forever? If you look at this formula again, there is nothing that seems to slow the thing down. This formula is given with no damping, right? So this thing will just continue forever, and we know experientially that that's not the case. Stuff will stop oscillating at some point, because of the aerodynamic drag of the weight and also because of internal friction within the wire that holds the weight. So actually it looks like this: the oscillation will become less, and at some point it will stop. Now again, importantly, for your pendulum clock (I think it's called a grandfather clock in English, right? I'm not sure) it doesn't really matter, because you'll add additional energy to the clock every evening, you know, right around the Tagesschau anyway. And so it doesn't really matter. But if you try to work with this kind of problem, then it is a concern. Not sure if you recognize this picture. It basically represents how a gravitational wave looks. In 2016, the LIGO gravitational wave detectors heard gravitational waves for the first time. And here's this thing again: this shows time on the x-axis and the frequency of the wave on the y-axis, and the darker the shape, the more intense, the louder, the stronger the wave. They call this a chirp, because of how it sounds: the frequency increases and it becomes louder, like a bird's chirp. So how do those guys detect gravitational waves? Well, they have an interferometer, which means that they send in a laser pulse along one arm; the laser is reflected, passes through a semi-transparent mirror, travels along another arm that is orthogonal to the first one, is reflected again, and comes back to a place where the two laser beams that went through either or both arms are interfered. And if the two beams arrive in phase, there will be constructive interference, whereas if they arrive exactly out of phase, then the output will become dark. So any difference in length between the two arms will change the interference pattern at the output. That's the idea.
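For reference, the output intensity of such a Michelson-type interferometer varies with the arm-length difference $\Delta L$ roughly as

$$I \;\propto\; \cos^{2}\!\left(\frac{2\pi\,\Delta L}{\lambda}\right)$$

with $\lambda$ the laser wavelength, which is why a passing gravitational wave, stretching one arm while squeezing the other, shows up as a change of brightness at the output.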
And so if the gravitational wave passes by and makes one arm longer and the other one shorter (I hope you can see this in my great, wonderful animation here), then you can see this thing basically vibrate; the output vibrates visually. Well, I mean, there's a lot of detector physics going on, but in principle, of course, this only works if these mirrors on either end don't vibrate just so, right? They told me a story when I visited the GEO600 detector: whenever the package delivery guy drives too fast along the access way to their site, they can see the mirrors vibrate, and they keep telling the postal guys to go slow. And whenever a new postal guy shows up, he destroys the measurement. So what they do is attach these mirrors to multi-stage pendulums, because pendulums, when they have a high mass, actually have very good damping, about 1000. And so with this triple pendulum here, they can get a passive damping of, I think, 1000 to the power of three; that would be 10 to the power of nine, right? And then they do some active damping as well. They precisely understand the behavior of the laser. They even try to understand, measure, optimize and limit shot noise. And so my point here is: you don't have to understand the damping of a pendulum for your stupid clock, but you do have to understand it very precisely if you want to distinguish the natural pendulum behavior, and the environment, postal guys and little earthquakes, from whatever a gravitational wave does. So: all models are simplifications, even the one for the gravitational wave detector, but how, and how far, depends on a model's purpose. Let's look at another example. I visited the Wendelstein 7-X stellarator fusion experiment in Greifswald, and I asked them: so how do you guys model the plasma in the reactor? And they told me: well, it's not so easy. They have one model that models it as a fluid, which is called magnetohydrodynamics. Another one models it as a mix of two gases, the electron gas and the nuclear gas, basically. They can also model it as small particles that are disc-like, and they can model it as point-like particles that move on a circle on this disc, because of the magnetic field configuration. And they use all of them, right? Depending on what they want to do, they use all of them. Obviously, the MHD model on the left doesn't give you detailed information about a particular particle, but if you want to characterize the overall behavior of the plasma as a body that is inside the reactor, then that's good enough. And obviously, each of these models has to make the same prediction when you get to a boundary case, from which point on you use the other representation; that's another kind of physical constraint. Let's look at this example; I'm not sure if people recognize it. This is the configuration of the various accelerators at CERN. The big ring is the Large Hadron Collider, and the smaller rings are earlier colliders that are now used as pre-accelerators before the hadrons enter the large storage ring. And the LHC has these huge experiments (you see the person standing there at the bottom), these huge experiments which capture what happens when particles from these beams collide. And again, here's another picture, and again, consider the people for reference.
The way these detectors work is basically that you have a whole bunch of different detecting elements arranged around the central collision point, and when particles fly out, they hit these sub-detectors, as they are called, and produce some kind of electronic signal, and that is then used. Quick comment to the technical people here: I get a lot of noise on my Mumble channel, not sure who should mute their microphone there. Here is an illustration of one of these collision events. You can see the beam pipe in the center, through which the two particles that collide travel, and then you can see how the various collision products stream out and how they light up the various detector surfaces. And what this shows here is the collision of two protons which forms a Higgs boson, which then decays to two bottom quarks and a W boson, which then also decays. How do we know, right? How the heck do they know what's happening, what's going on here? So let's first understand a little bit how these detectors work. Fundamentally, they have a very strong magnetic field, and if a particle has some electromagnetic charge (and most of them have, otherwise it's actually hard to detect them), then, again a little bit of school physics, Lorentz force and stuff like that, the particle will be bent onto a circular track by the magnetic field. And then they have different kinds of detectors. For example, if you look at an electron here (that's the second example from the top), it leaves a trace in what's called the silicon tracker, and then it decays in the electromagnetic calorimeter. And so if you look at the various different particles, each of them leaves traces in a different combination of detecting elements. So this gives you the type of particle. And then, through the bending, you can measure its momentum, and basically through type and momentum you get a unique characterization of what's going on. You can even calculate the overall energy balance and the momentum balance and stuff like that; it's very sophisticated, really interesting stuff. So in this case, in the example I gave, we have a Higgs boson that decays in various ways. I've just called these particles A through F here; it doesn't really matter what they are. And these decays are multi-step. So what the detector actually sees is what they call the final state, which is these decay products. What the scientists are interested in is what actually happened in the collision. And so the challenge is, after observing the final state, the decay products, with the detector, to calculate back what actually happened in the collision. Was there a Higgs or wasn't there a Higgs? And what is the mass of that Higgs boson? That was the big question back then, in, I think, 2013, when they discovered it. By the way, Einstein has nothing to do with this; I just used the picture. So in German we say there are Haken, hooks, meaning catches, right? It's not so easy; there are problems. And so I used this picture of a hook; it doesn't really work in English. But what are some of the challenges they have to overcome? Well, we have seen that the Higgs decays in multiple ways, and each of these decays happens with a certain probability. So they have to somehow untangle that. The decays are non-unique: as you can see in this example, this hypothetical E particle can either arise from H to B to E, or from H to A to D to E. So just because you see an E, you don't really know what happened in between.
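To make the momentum measurement from the bending concrete: for a singly charged particle on a circular track in a magnetic field, a standard rule of thumb is

$$p\,[\mathrm{GeV}/c] \;\approx\; 0.3 \cdot B\,[\mathrm{T}] \cdot r\,[\mathrm{m}]$$

so the straighter the track, that is, the larger the radius $r$, the higher the momentum of the particle.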
So in this case, in the example I gave, we have a Higgs boson that decays in various ways. I've just called these particles A through F here; it doesn't really matter what they are. And these decays are multi-step, so what the detector actually sees is what they call the final state, which is these decay products. What the scientists are interested in is what actually happened in the collision. So the challenge is: after observing the final state, the decay products, with the detector, you want to calculate back what actually happened in the collision. Was there a Higgs, or wasn't there a Higgs? And what is the mass of that Higgs boson? That was the big question back then, around 2012, when they discovered it. By the way, Einstein has nothing to do with this; I just used the picture. In German we say "es gibt Haken", there are catches, there are problems, which is why I used this picture of a hook; the pun doesn't really work in English. So what are some of the challenges they have to overcome? Well, we have seen that the Higgs decays in multiple ways, and each of these decays happens with a certain probability, so they have to somehow untangle that. The decays are non-unique: as you can see in this example, this hypothetical E particle can arise either from H to B to E or from H to A to D to E. So just because you see an E, you don't really know what happened in between. And the collisions are not unique either: depending on how well the protons align in their collision, you get hard collisions or soft collisions or any intermediate state, and the kind and the energy of the created particles depend on whether it was a soft or a hard collision, and on all kinds of other parameters. So that is also a source of complexity. And finally, these detectors are marvels of engineering, but they are also terribly complex; each of the components has its own failure modes and failure models, and they are complex systems in themselves. So again: how do they know what they're observing? And how do they know it with five sigma? Five sigma is basically a measure of the trust in a result, and it is the threshold in physics for a discovery. Five sigma means that there is a probability of roughly 0.00003 percent that the observation happened by chance and is not the physical effect they assume. This is closely related to the p-value used in statistics. The probability shrinks with the amount of data: the more data you have, the higher the trust. And it shrinks with the size of the effect: the bigger the effect, the higher the trust. By the way, it's interesting to compare these five sigmas to the p-values used in medical trials: there they are happy if they get 5 percent, not 0.00003 percent. Quite a big difference. So how the hell do they do that? Well, statistics: they simulate billions of events to understand the distribution of what will happen. How does this work? They have what's called an event generator: a software model that simulates the physics of the decay after a collision. You specify collision parameters, is it a hard or a soft collision, and this thing gives you a distribution of particle vectors, of basically what's streaming out of the collision site. Then they have an experiment model, called Geant4. It is, obviously, a simplification, but it's a representation of the actual detector and of the physics in the detectors themselves, because the detectors work by running certain physical processes when the particles interact with the detector material, and that is simulated in this model. This gives you simulated detector hits. Then they have basically an electronics simulator that has a noise model for all of the electronics of these detectors, and this gives you simulated raw data. And they run billions of these simulated collisions in order to get what they call the background: what would they see in this detector, statistically, if there wasn't any Higgs? So they run these simulations without a Higgs, just the other known processes. And then they take the measurements and compare. This is the plot of what ATLAS saw: you can see that at 125 gigaelectronvolts there is some kind of peak, and that is the Higgs boson. They basically find this out through statistical calculations based on huge amounts of data. This is why they have some of the largest computing centers: they run millions of lines of code in these various simulators, developed over decades. Good old Fortran is still the backbone, and they also use Python for a lot. And they have, I think, terabytes of data.
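As a quick aside on the five-sigma threshold: the sigma-to-probability conversion is just the tail of a normal distribution, which you can check in a couple of lines. A sketch using SciPy; the one-sided convention shown here is the one usually quoted in particle physics.

```python
# Sketch: converting a significance in sigmas into the probability that the
# observation is a pure background fluctuation (one-sided convention).
from scipy.stats import norm

for sigma in (3, 5):
    p = norm.sf(sigma)            # survival function = 1 - CDF
    print(f"{sigma} sigma -> p = {p:.2e} ({p * 100:.7f} %)")

# 3 sigma -> p = 1.35e-03 (0.1349898 %)   "evidence"
# 5 sigma -> p = 2.87e-07 (0.0000287 %)   "discovery"
```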
There's a whole worldwide distribution and scheduling infrastructure for running all these simulations; a huge infrastructure behind this, in order to get enough data to find out, with some degree of trust, that in this case they discovered the Higgs. So how can we build trust in models? Well, one way: we can run lots of experiments with lots of data and then do statistics. Experiments are nice because they give you controlled conditions; they allow you to isolate the various influencing factors by controlling for them. So if you see things correlate, it's relatively easy to derive causality: instead of just saying "this happens and that happens, and whenever this happens, that happens", you can say "because this happens, that happens". That is causality. But sometimes you can't do experiments. In climate, epidemiology, economics, many societally relevant questions, you can't really run an experiment. I mean, in some sense we are running an experiment with our atmosphere, but you can't build another planet and run a climate experiment in that sense. So what you have to do then is fall back on pure observation, where you do not have controlled conditions and usually cannot isolate influencing factors. So you really just get correlation, and then you have to make an attempt at deriving causality, which is much harder. Sometimes you're lucky and have natural experiments. The economics profession tries to use those: in this country, when they did this and that, it gave them inflation; in this other country, where there was the following difference, there was no inflation; so, because of that difference, they claim, this is what produces inflation. But of course, again, you never know what else has changed. It's harder. It's maybe a bit easier in epidemiology these days, where the factors between countries aren't that different, so looking at other countries that do stronger lockdowns might be informative. Different story. So what can we do if we don't really have a way of running experiments? Let's say we want to run some kind of forecast, and we know that this forecast depends on some parameter p that we have to tune in our model for the forecast to be correct. But we can't validate easily: we have no data for comparison, because we want to forecast the future. Well, what we can do is forecast the past. We can move back in time and run the forecast from a point in the past where we do know what actually happened in the real world. And then we can tune our parameter p to the value, call it p*, that best fits our forecast to the actual data that we know, because it already happened. And then we can continue using this parameter value for the future. This actually happens in weather forecasting, where it's called re-forecasting. The weather models that the various weather services run continuously basically go back and forth between re-forecasting for parameter tuning and re-forecasting for filling in inputs they don't know, because there is no way to measure the temperature everywhere above the ocean, only where there are ships and a bunch of weather buoys. So they go back and forth to do this kind of stuff, and this is really the backbone of weather forecasting.
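A minimal sketch of this kind of re-forecasting, with a toy stand-in model; `model`, `past_states` and `observed_outcomes` are placeholders, not any real weather service's API:

```python
# Sketch of "re-forecasting": tune a model parameter p so that forecasts
# started in the past best match what actually happened.
import numpy as np

def reforecast_error(p, model, past_states, observed_outcomes):
    preds = [model(state, p) for state in past_states]
    return np.mean((np.array(preds) - np.array(observed_outcomes)) ** 2)

def tune_parameter(model, past_states, observed, candidates):
    errors = {p: reforecast_error(p, model, past_states, observed)
              for p in candidates}
    return min(errors, key=errors.get)   # the p that "forecasts the past" best

# toy example: a linear "model" whose slope we recover from past data
model = lambda x, p: p * x
best_p = tune_parameter(model, [1, 2, 3], [2.1, 3.9, 6.2],
                        np.linspace(0, 4, 81))
print(best_p)   # ~2.05, close to the least-squares slope of the past data
```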
So another way of building trust in models is re-forecasting the past. Now, all models are simplifications; we've talked about that. And the question is: if we simplify, which of potentially many parameters can we ignore, and which are important? Here is another model M. It has three parameters P1, P2 and P3, and we don't know which of them is important. Well, what we can do is scan through the ranges of all of these parameters and see which scan has the largest consequence on the output of the model. In this case we see that delta M for P3 is the largest, so we know P3 has the largest influence. By the way, there might be an even larger influence when we combine the scanning of P1 and P3, so we have a combinatorics problem. I forget which scientist I asked about that, but they said: well, we don't do these combinations, it's too complicated. So that's another simplification: they treat each of these parameters as uncorrelated with the others, which might or might not be correct. Anyway, we find out that this model is sensitive to the value of P3. What does this mean? It means we have to invest effort, or computational power, into determining P3 precisely. Or, if we can't do that, because we might not have a way of finding the value (we don't have temperature sensors all over the ocean; maybe satellites can help, but that's a different topic), then we have to vary P3: whenever we make a forecast, make it for the whole range of P3 and see what that gives us. And this, more or less, is what's called ensemble forecasting in meteorology, where they run different models, different parameterizations of the same models, and also different values of the input conditions where those are not known, in order to get a probability distribution of outputs. So they might vary the temperature at a given point, run the forecast, and get a probability distribution of the output. And then they either communicate the probability distribution to their users, or they just give you the result that is most probable. There is a weather forecasting service called Meteoblue, and I like them, because if you look at this screenshot on the left, you can see, in German of course, "Die Treffsicherheit der Wetterprognose ist mittel": the reliability of this forecast is medium. And you can click on a button and get what they call the multi-model view. This is essentially, if you will, a sensitivity analysis: it shows you what the different models forecast, and if you actually go to the page you can nicely see how they diverge more over time. They even give you the verification: they basically tell you how good this model was over the last three days. So they give you a lot of information, if you want to invest the time in analyzing how reliable the model is. This particular forecast is used by glider pilots; the example is from Gap, a favorite gliding site in southern France, and people spend half the morning poring over this and other forecasts to figure out what the weather will be.
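The one-at-a-time parameter scan described above fits in a few lines. A sketch with an invented model and invented ranges; note that, exactly as in the anecdote, scanning one parameter at a time ignores interactions between parameters:

```python
# Sketch of a one-at-a-time sensitivity analysis: scan each parameter over
# its range, hold the others at their baseline, and see which one moves the
# output most. The model and ranges are made up for illustration.
import numpy as np

def sensitivity(model, baseline, ranges, n=50):
    deltas = {}
    for name, (lo, hi) in ranges.items():
        outputs = []
        for value in np.linspace(lo, hi, n):
            params = dict(baseline, **{name: value})
            outputs.append(model(**params))
        deltas[name] = max(outputs) - min(outputs)   # "delta M" for this parameter
    return deltas

model = lambda p1, p2, p3: 0.5 * p1 + 0.1 * p2 + 4.0 * p3
print(sensitivity(model,
                  baseline={"p1": 1.0, "p2": 1.0, "p3": 1.0},
                  ranges={"p1": (0, 2), "p2": (0, 2), "p3": (0, 2)}))
# p3 dominates -> invest effort in pinning p3 down, or always vary it.
```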
Now, in epidemiology, things are really simple, right? It's exponential growth: the number of infections grows exponentially with some factor. This b here is kind of what we know as R; it's not exactly the same, but for now it's good enough. But we also know it's not really like that: if there are no more people left that can be infected, or more generally, if there are no more resources into which an exponential growth can grow, then the growth has to stop. That's called the logistic growth model, as opposed to an exponential growth model. And everybody knows by now that herd immunity is the level of infection where there are no more people left to be infected, or at least where the infection rate becomes, quote, inefficient from the perspective of the virus; the growth becomes slower, and at some point the increase in infections stops. So what else could there be? Simple, right? Well, there is this website called COVIDSIM, by a company called Xplosus, and they have this COVID simulator. There's a whole bunch of differential equations they use: the number of susceptible individuals, the individuals in various periods of infection, the number of recovered individuals, the number of people who have died. So it's quite a bit more complicated. Actually, it's way more complicated: lots of different dynamical equations, intervention effects you might want to model, and a whole long range of parameters that you can, and have to, set. And there's an interesting effect that I figured out when I talked to one of these guys: the ratio at which the population reaches herd immunity depends on how they break down the age groups. Now, I'm not talking about the actual age distribution in the population; obviously, if you have more people, more people will die, we know that. What I'm talking about is whether you split your population into three groups, zero to 30, 30 to 60, 60 to infinity, or whether you split that same age pyramid into 10 or 15 subgroups: the model makes different predictions. Who'd have thought, right? So this is an example of a sensitivity analysis where you have to somehow figure out what a good age structuring for your model is.
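To make the exponential-versus-logistic distinction above concrete, a toy simulation of both regimes; the numbers are invented and this is not a calibrated epidemic model:

```python
# Exponential growth I' = b*I versus logistic growth I' = b*I*(1 - I/N),
# which saturates when the susceptible pool N runs out. Toy numbers.
b, N, steps = 0.2, 1_000_000, 120
exp_i, log_i = 100.0, 100.0
for day in range(steps):
    exp_i += b * exp_i
    log_i += b * log_i * (1 - log_i / N)
print(f"day {steps}: exponential {exp_i:.3e}, logistic {log_i:.3e}")
# The exponential curve explodes past any population size; the logistic
# curve levels off near N: the "no resources left to grow into" regime.
```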
There's another nice example of sensitivity analysis in fusion. One thing they might want to optimize in the reactor is the distance between the plasma and the reactor wall. As you perhaps know, the plasma is very hot and is stabilized by a magnetic field, and if the plasma hits the wall, well, it might damage the wall, but more importantly the plasma will be disturbed and might collapse. So you want to keep the plasma away from the wall; that's the point of the magnetic field. So there are parameters you can tune: you can increase or decrease the field, increase or decrease the gas density, increase or decrease the number of neutral particles that are injected as a way of injecting energy, change the temperature of the plasma. The point is, that's something like 10 to the power of 100 combinations. How do you do that optimization? Well, one way is to use the simplest of the various representations of the plasma I mentioned before. So this is something they do with MHD, the fluid representation, because it's the simplest and most efficient. But even then: there are only about 10 to the power of 85 atoms in the universe, so 10 to the power of 100 combinations won't work. This is where, in these numerical models, computer science comes in: various optimization algorithms, numerical solvers, things like hill climbing and simulated annealing. Not something I want to cover in this talk, but I thought it was important to point out that it's not just about finding the right model and the right abstractions and deciding on the right parameters; there's a lot of cleverness going on in the programming, and in how you then run this on a parallel computer. By the way, a little anecdote: I talked to a meteorologist a few days ago for a podcast interview, and he said it's going to be really painful when, in the next or next-next generation of supercomputers, everything will be computed on graphics cards, because they can basically throw away all their code and have to rewrite everything. So that's going to be painful.
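To give a flavor of the kind of numerical optimizer meant above, here is a toy simulated-annealing loop. The cost function is a made-up stand-in for "quality of a plasma configuration", not anything from the actual stellarator codes; the point is only the shape of the idea: accept worse configurations with a probability that shrinks as the "temperature" drops, so the search can escape local optima.

```python
# Toy simulated annealing: random local moves, with occasional uphill
# acceptance governed by a cooling temperature schedule.
import math, random

def anneal(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=5000):
    x, best, t = x0, x0, t0
    for _ in range(steps):
        cand = neighbor(x)
        delta = cost(cand) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand                      # sometimes accept a worse point
        if cost(x) < cost(best):
            best = x
        t *= cooling                      # cool down: become greedier
    return best

# minimize a bumpy 1-D function as a stand-in for a configuration search
cost = lambda x: (x - 3) ** 2 + 2 * math.sin(5 * x)
print(anneal(cost, neighbor=lambda x: x + random.uniform(-0.5, 0.5), x0=0.0))
```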
So how can we build trust in models? We can vary parameters, do a sensitivity analysis, and analyze the impact. Now, I want to re-emphasize something, because somebody on Twitter recently told me that this COVID modeling must be rubbish because the forecasts were wrong. That person completely misunderstood something, and that is forecasting versus scenarios. Let's say you have a model and you want it to make correct predictions, but you can't take into account all of the millions of different parameters; you have to find out which ones are the most relevant. That's when, as we just learned, you run a sensitivity analysis: for example, to find out how cloud formation depends on temperature and humidity, how the energy radiated by the earth depends on cirrus cloud coverage, or how the amount of precipitation depends on the layering of the atmosphere. Once you've figured that out, you put it into your model, and then you can measure various data in the world and make actual forecasts: the output of the model claims that this is how the weather will behave. Okay? That's what we do in forecasting. Now let's look at another example. Let's say we want to figure out how the one-to-one infection rate depends on how much virus people shed when they breathe, how the virus shedding depends on the kind of mask you wear, and how the one-to-many infection rate depends on the rate of interpersonal contacts. Here you can also run a sensitivity analysis on simulated data, but what you get isn't a forecast, because you don't know how people will behave. What you get is scenarios: you can say that if you guys behave in the following way, then the infection rate will increase. There's no point in saying the COVID "forecast" was wrong, because it didn't make a forecast; it gave us scenarios that give us hints about how we should behave. And of course it's a bit frustrating. We have a healthcare model that says that an early and hard lockdown works best, because it's easier to slow down exponential growth while the absolute numbers are low. An economic model told us that early and hard lockdowns work best, because shorter lockdowns, for the reasons above, mean less total cost. And there's a psychological analysis that says an early and hard lockdown works best, because if you become harder over time, people are already fatigued; they have lockdown fatigue, so they won't comply. Yet Germany still started with a "lockdown light", and did it too late. So it's quite a bit useless if you have all these models that make useful and, I guess, rather reliable scenario predictions, and then we don't care. But that's just a side note; it frustrates me a great deal, so I had to include it in the talk. All right. Going back to building trust in models: we were at this parameter variation story, but there is another way to build trust in models. If we have an explaining model, one that doesn't just tell us the how but also the why, then we can inspect that model and check whether the explanation is plausible. Does it make sense? For example, if your plasma optimization thingy comes up with a great solution for maximizing the distance of the plasma from the reactor wall, but it predicts a pressure twice as high as what your reactor vessel can bear, it's not a good result. But you can only notice that because you can inspect the solution the model created for sensibleness. So: you can check explanations, if they are there. Let's look at how you get from data to the model. You probably all remember this picture, the first ever image of a black hole. Well, it's sort of an image. I should rather say: this is an image, but what the EHT, the Event Horizon Telescope, recorded wasn't really an image, because of how the EHT works. Let's see how this works. The EHT is a radio interferometer, which means you computationally combine several telescopes, and there's a lot of computing going on. In this case they combined a bunch of telescopes all over the world in something that's called very long baseline interferometry. The "very long baseline" refers to the distance between the telescopes, and that is relevant because the resolution this computational thing can achieve depends on the length of these baselines: the further apart the telescopes, the higher the resolution of the picture. Well, "picture". Here is a photo of one of these telescopes. This particular one, Effelsberg, is not used in the EHT because it can't observe at the respective wavelength, but it's these kinds of telescopes, some of them smaller, all over the world. Now, how does this work? These different telescopes look at the same spot in the sky, but because they are some distance apart, the distance from each telescope to that point in the sky is different. So they see the same arriving light wave with a certain phase difference, because there is this difference d in distance; and because the speed of light is constant, this corresponds to a time difference as well. Now, as the Earth rotates, and also as you look at different points in the sky, this distance offset changes, because the angle to the object in the sky is different for each of the telescopes. If you take these two observations and apply a whole bunch of mathematics, which I'm not going to go into, and probably couldn't in detail, you get an equation which shows that the EHT effectively observes a Fourier transform of the sky: it observes the spatial frequency distribution of the radiation across the sky.
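For the curious, the equation being alluded to is, in simplified form, the van Cittert-Zernike relation of radio interferometry. The details don't matter for the argument; what matters is that the measured "visibilities" V are Fourier components of the sky brightness I:

```latex
% Each baseline, with components (u, v) measured in wavelengths, samples one
% spatial-frequency component of the sky brightness distribution I(l, m):
V(u, v) = \iint I(l, m)\, e^{-2\pi i\,(ul + vm)} \,\mathrm{d}l\,\mathrm{d}m
```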
And the actual data point is basically: we observe this amount of radiation with this frequency, and by that I mean spatial frequency, at the following orientation. They combine all of this data, and the more different baseline lengths and the more different baseline orientations they have, the better. But still, they only have, whatever, seven telescopes, so they have on the order of ten baselines, and only a limited number of orientations of these baselines. Sure, the Earth rotates, giving them a bit more, but it will still be incomplete data. So what they have to do is an inverse Fourier transformation of incomplete data. The question they're answering is: which mathematical description of the object in the actual spatial domain best represents, or approximates, the data that has been seen by the telescopes in the frequency domain? So let's look at a simple example of this kind of problem. On the left side you see a bunch of measurements, some data. On the right side you see a mathematical model, in this case a horizontal line, a simple equation y = y0, with no dependence on x. And the total error of that model relative to the observations: you can imagine it as the sum of all the red bars, the differences between what the model predicts and what was actually observed. Now, there could be different mathematical models that approximate that same data, and the question is which one is the right one. Again, in the EHT case: which visual shape actually fits best to the observed incomplete data in the frequency domain? This example here on the right fits best, because it has a total error of zero. But actually, this is an overfit. If you put enough terms into your polynomial, you can fit any data with some polynomial. But while it reproduces the data perfectly, it is of no use for prediction, because there's no way to tell what comes next if you extrapolate to the right. And it's no abstraction; it's not a useful model. It's just an encoding, if you will, of the measurements as a formula. By the way, this is what's called overfitting, or overlearning, in neural networks. Just as a side note.
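The overfitting point is easy to reproduce. A toy sketch with made-up data: the degree-9 polynomial drives the error on the data to essentially zero but is useless for extrapolation, exactly the "encoding of the measurements as a formula" described above.

```python
# Under- vs. over-fitting in miniature: fit polynomials of increasing degree
# to noisy linear data and compare in-sample error with extrapolation.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 9, 10)
y = 2 * x + 1 + rng.normal(0, 1.0, size=10)   # truth: a noisy line

for degree in (0, 1, 9):
    coeffs = np.polyfit(x, y, degree)
    fit = np.polyval(coeffs, x)
    err = np.sum((fit - y) ** 2)              # the "sum of the red bars"
    extrap = np.polyval(coeffs, 15)           # predict outside the data
    print(f"degree {degree}: error {err:8.2f}, prediction at x=15: {extrap:10.2f}")
# degree 9 has (near-)zero error but a wild prediction at x = 15.
```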
So how do you decide between the three remaining ones? Well, you decide based on knowledge about the system at hand: you know something about what your observed system should look like, because you understand the physics; you have a simple, analytical, intuitive model of what's going on. In this case, the model of the black hole tells us that between the center of the black hole and a distance of 2.6 times the so-called Schwarzschild radius, you will see black: in the middle you look at the black front side of the black hole, and a little further out the light rays coming from behind are bent by the strong gravitation so that you see the back side of the black hole, which is also black. So we know that up to 2.6 r_s it will be dark. And outside of that we will see a glow from matter that is accelerated around the black hole by the gravitational force. So there must be something glowing, and there must be a black thing in the middle. And we even know roughly how big this is, because we can calculate the Schwarzschild radius through other means, or at least guesstimate it. So we kind of know what it will look like: well, kind of like this donut. And knowing that, we can use this knowledge to help interpret the incomplete data. Now, of course, the problem is: how is that not self-fulfilling? How do we avoid observing exactly what we want to observe? Because then we wouldn't have to observe it at all; we'd be faking it. So what they did here is they used three different deconvolution algorithms to do this kind of inverse Fourier transformation, with three different software packages, for independent teams who weren't allowed to see each other's data. They also worked with synthetic data: they used magnetohydrodynamic simulations of black holes, and also of other shapes; they produced such data for, I don't know, rectangles, and if the software still produced a donut as a result, then something fishy was going on. And of course they still minimized the error of the fitting. So there is a lot of work going into this reverse analysis of the data. It's fascinating to read; there's a nice book by Heino Falcke, one of the leaders of the project, I think it's called Light in the Darkness, that talks about that and other things. So again: does the explanation make sense? We know how the physics works, so we can say something about what we expect. So, how far can we trust models? Here's a statement I've read: if we had started the lockdown one week earlier, 36,372 fewer people would have died. This is what I call pointless precision, because the precision of whatever your model predicts must somehow be proportional to the precision of the input and the parameterization. And obviously there's a lot we don't know precisely about this virus and about how society reacts. So this leads us to the observation that there are models that are qualitative in nature; they only tell us tendencies: if input A increases, output X will also increase. There are relative models, which give you comparative influences: input A influences output X twice as much as input B. Useful. And then there are quantitative models, which actually give you numbers: if you increase input A by 10 percent, then output X decreases by 33.3 percent. Now, formula doesn't mean quantitative. There is this Drake equation, which supposedly tells you how many civilizations there are in our galaxy. It has all kinds of factors, like the average rate of star formation, the fraction of planets that could support life, the fraction of civilizations that send detectable signals, and so on. It's a formula, it has lots of factors, so it's numeric, right? It gives you quantitative outputs? No, it doesn't. This is really just a parametric model, because none of these parameters has a really known quantity. We have rough boundaries, rough boxings, for each of them, but not more. So just because you have an equation doesn't mean you have a quantitative model; it basically just says which things influence the result, and whether the influence is proportional or antiproportional.
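For reference, the Drake equation in its usual form; every factor on the right is a parameter whose value is essentially unknown, which is exactly why the formula stays parametric rather than quantitative:

```latex
% N: number of communicating civilizations; R_*: star formation rate;
% f_p: fraction of stars with planets; n_e: habitable planets per system;
% f_l, f_i, f_c: fractions developing life, intelligence, and detectable
% technology; L: lifetime of a signaling civilization.
N = R_{*} \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L
```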
So let's look at fundamental limitations of this whole modeling business. The first is chaos, which means that a small change in the initial conditions leads to huge effects over time. You've probably all seen this chaos pendulum: a pendulum with two hinges, if that's the word. If you take the pendulum, put it into a certain position and let it go, it will oscillate in some way. And if you then try to put it into the same position again, after a few seconds it will move completely differently, because the behavior is chaotic and you're not able to put the pendulum into exactly the same initial condition. Also, you don't know exactly what other factors influence it; there might be some air movement going on, or whatever. The other example everybody mentions in the context of chaos is that if a butterfly starts flying somewhere in Brazil, this leads to thunderstorms in Europe. I did ask various meteorologists whether they actually think that's true. Nobody disagrees that the weather is chaotic, but I think they said that this very small effect will very likely not lead to a thunderstorm, although in theory it could. So the point is that both of these systems are deterministic. There is no randomness in the physics of the atmosphere; the quantum effects really aren't relevant on that level. There's also no randomness in this pendulum. But we don't know the interactions, for example the friction in those two hinges, and the initial conditions precisely enough. If we knew them precisely enough, we could make a perfect prediction. Or, stated less absolutely: more input gives you a better prediction. More weather stations, more satellites, more ships, more airplanes with sensors give you a better prediction. So this chaotic behavior of the weather really is the fundamental limit for weather forecasts, for any given data quality and computational power. That's why, I think they told me, roughly every 10 years the useful range of weather forecasts becomes one day longer. And that is not just because the computers get faster; it's also because they have more inputs, and of course because they understand the mechanisms better. So if we can't predict the weather for more than a few days, the climate skeptics will say, how can we predict climate? This is all fake; they can't even predict the weather. And of course this is bullshit, because climate is a statistical prediction. It's not relevant whether it will rain on the 5th of March 2038 in Stuttgart; that's not what climate models do. They make statistical predictions. And to drive home this difference, going back to the fusion example: because of chaotic behavior, we really cannot predict the position of each molecule or atom in this mix of two gases, the kinetic model here. But what we can do is calculate the averages, and in this case the averages of these movements are called temperature and pressure. We can calculate those precisely, despite not being able to forecast every molecule precisely. So there is an interesting thing in climate science called attribution science. What these guys ask is: is this extreme weather event that happened there caused by climate change? Really, what they're asking is: with which probability is this kind of weather event, at this location on the earth, at this time of year, caused by climate change? How do they do that?
Well, they take the current atmosphere, they modify the initial conditions, like temperature, humidity, pressure, stuff like that, they scan through these, and then they run the climate models or weather models; these are the same models these days anyway, it's a question of parameterization. And they figure out: with how many of these modified initial conditions do we get this kind of extreme weather event? Then they take the atmosphere without the human impact: they remove the CO2 and the methane that we've introduced over the last decades, because how much that is, is relatively well known. They do the same modification of initial conditions, run the weather forecasts, and get another ratio of this extreme weather happening versus not happening. So they get different degrees of robustness of that extreme weather phenomenon, depending on whether they run the real atmosphere or the one without human impact. And then they basically take the difference between the two, and that gives you the probability of this weather event being caused by climate change. I thought that's a very ingenious use of climate models. Of course it also takes a lot of computational power, because you have to run these models lots and lots of times. Now, you could say: well, this whole chaos stuff isn't really fundamental; it's just that we can't measure precisely enough. Really there isn't any chaos, it's just that our engineering is too bad. Well, let's look at cellular automata. A cellular automaton is a mathematical abstraction that has a row of cells; a cell can be either alive or dead, and we observe how the cells evolve over time, over generations. The generations, or time, are on the y-axis, and we have a whole bunch of cells on the x-axis. Generation i depends on the previous generation: the value of each cell depends on the values of itself and its two neighbors in the previous generation. You can see this in the diagram: we have generation i minus one, and the value of the cell with the question mark in generation i depends on whether the three gray cells above it were alive or dead. And then we have an example rule, rule 1, which says: only if the three predecessors were all dead is the cell in the next generation going to be alive, some kind of birth from nothing; in every other case, including when all three predecessors were alive, the cell is dead. If we plot this rule 1 (there is a way of classifying these rules and giving them numbers) over 20 or 100 or 1000 steps, you can see that a regular pattern occurs. So we can give a formula for this rule. In this case it's very simple, because only in one case do we basically flip the values of the previous cells, except for the two cells next to the center, left and right, which always stay zero. The point is: there is an iterative formula, one that takes into account the values of the previous generation; that's why it's iterative. But we can also give a closed formula, one that does not rely on the previous generation: we can distinguish between even and odd generations and just state which cells are alive and which are dead. So that's fine; that's deterministic behavior, we can predict it for any number of generations, and we don't have to iterate. Now here is another rule, rule 30.
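An elementary cellular automaton like these fits in a few lines; this sketch is enough to reproduce both the regular pattern of rule 1 and the chaotic triangle of rule 30 discussed next:

```python
# Minimal elementary cellular automaton: the rule number's bits give the
# next state for each of the eight (left, self, right) neighborhoods.
# Edges wrap around for simplicity.
def step(cells, rule):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, width=63, generations=20):
    row = [0] * width
    row[width // 2] = 1                      # single live cell in the middle
    for _ in range(generations):
        print("".join("#" if c else "." for c in row))
        row = step(row, rule)

run(30)   # chaotic; run(1) instead gives the regular, predictable pattern
```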
Here is how this rule looks after 20 steps, and here after 100 steps. And in the two other illustrations at the top right, you can see how a simple change in the initial conditions spreads through basically the whole row. So this is actually chaotic behavior, and it's chaotic behavior in pure math. There is no measurement going on; it's a property of the math. We cannot predict the value of a cell in the i-th generation; we have to run the iteration. This is also called computationally irreducible. In fact, it makes a very good random generator, because the behavior is effectively random. This is all work by Stephen Wolfram, who wrote the book A New Kind of Science. It's interesting; not everybody takes all of it too seriously, but this work on cellular automata is not disputed. Cool stuff. So chaos is a fundamental property of our universe, and when it occurs in our systems, because our systems are complex, as we'll see, it limits the range over which we can say something useful with models. Another example is emergence. You've all heard how ant colonies exhibit very sophisticated group behaviors, even though the brains of these ants are presumably rather small and simple. So what's going on here? We have what are called agents; each ant is an agent. It behaves based on simple rules, for example based on pheromones and smells and stuff like that. But then we have lots of these agents interacting, in different ways, and they produce some kind of complex overall outcome. And the point is that we cannot predict the outcome just by observing and understanding the rules. We have to actually run the system, iteratively, and see what can potentially come out. And lots of systems we care about work exactly this way. Politics sets rules, and then we as a society behave in some way; we interact based on the rules, or maybe not based on the rules; there's lots of game-theoretical personal optimization going on. So we really can't predict. I did an interview with somebody from TU Delft, Igor Nikolic, about socio-technical systems. He investigates how society interacts with technical systems and what behaviors can result. And again, he can't really predict, because of this emergence. But what he can do is run these agent models, look at the scenarios that could potentially happen, discuss them with stakeholders, potentially go back, change the rules, run again, and maybe iterate towards a better set of rules. There's no proof there, no guaranteed understanding, but it can help; he has lots of interesting cases. There's also a very nice counterexample from politics, recently, in Berlin. They set an upper limit for rents. The idea was to get lower rents, so that less affluent people could find apartments. What actually happened, at least initially (I haven't checked in the last few weeks), is that 37 percent more apartments were sold instead of being put on the market for rent, because renting out just became less interesting for the owners; 28 percent fewer apartments were on the rental market; and there were 200 percent more inquiries per apartment. And there was a "shadow rent" agreed for the case that the law turned out not to comply with the German constitution. So the point is: this failed.
And it failed because one tried to set simple rules in an obviously complex system with lots of emergent behaviors. Now, I do think one could maybe have anticipated that even without running complex agent-based simulations, but still, it's an example of how rule-making in politics is challenging, because you just don't know how the system will react once you apply a stimulus by changing regulations. There is also an interesting example from cellular automata: the Game of Life, you've probably all heard of it. It's basically a 2D cellular automaton where the state of the cell marked with X in the next generation depends on what happens around it; there are phenomena like dying from loneliness, dying from overpopulation, healthy environments, reproduction. The exact rules don't really matter here. So here is a Game of Life simulation running for I don't know how many generations, and you can see there are things oscillating back and forth; there are so-called spaceships; there are glider guns. These are the technical terms for these seemingly coordinated behaviors. Who would have thought that from this relatively simple set of very local rules, seemingly coordinated phenomena can emerge? It almost looks like ants walking around a track. So again: pure math can produce complex, coordinated-seeming behavior from simple rules that do not in any way presuppose that kind of coordinated behavior. So what can we do here? We just said we cannot predict the behavior of the whole system by examining the rules. But what we can do is define a new set of rules that are valid for the system as a whole: whatever, if you have more than 100 ants and the temperature is over 30 degrees, they'll come out of their burrow, or whatever it's called, and start rummaging around; stuff like that. You can observe, or create, higher-level theories. In fact, this happens all the time: the behavior of ants might be based on biology, but biology is just applied chemistry, and chemistry, as we all know, is just the physics of the electrons; it's a joke among physicists. But the point is, we couldn't predict the behavior of even a single ant if we tried to do it with physics, even though it is physics that drives the ant; everything is driven by physics. We could try with chemistry, but that's probably also a bit too low-level. We can maybe be more successful with biology or medicine, by doing an MRI or an fMRI of the brain of an ant; I'm not a biologist, as you can probably tell. But the point is what we're doing all the time: we're stacking abstractions on top of one another; we're building models of aggregated subsystems in order to understand those. There is a very nice quote by Edsger Dijkstra, the famous computer scientist, who said: the purpose of abstraction is not to be vague, but to create a new semantic level on which one can be absolutely precise. This also relates to the averaging stuff in the fusion reactor. A very cool kind of framework that humanity has built for itself there. Another example of stuff that can go wrong in modeling: path dependence. There is a complex tree of decisions we've made over time (I don't know, it's just a cool screenshot), and we are somewhere out there on the outer edges of this path. How did we get here?
And the answer is: we got here because of all these random, context-dependent, resource-dependent, cultural and political decisions we've made over time. For lots of phenomena, if you ask why something in our society is the way it is, there really isn't a good answer. It could just as well be different, but because of all these random things over the decades and centuries, we just ended up here. There isn't a good reason; it just happened. And the only way to predict, if you will, is by going down the path; you cannot simplify. You can't run an analysis and say, well, obviously society has to do things this way. It's not like that. The obvious example is evolution: evolution is not a deterministic process. Because of all kinds of constraints and randomness, and a meteor impacting the earth, stuff happened. All of these things are examples of complexity. There are other ingredients, like hidden links, feedback cycles, tipping points, power laws. If you listen closely, you have heard all of those in the discussion about climate, which just drives home the point that our climate is a complex system, and that makes it hard to control in a deterministic way. And by control I don't necessarily mean geoengineering; it's really hard to understand and decide what to do. I mean, obviously we have to reduce CO2, I'm not discussing that; maybe the point is that geoengineering is risky. So, last point before we wrap up this talk: unknown unknowns. Models almost always represent knowledge and experience somehow. There's no completely unknown stuff in models, because nobody could have put it in there. There might also be biases and prejudices in there. And we won't ever run a sensitivity analysis for unknown unknowns, because we don't know them; we're not aware that they're unknowns, so we're not testing the model for what it would mean if that unknown parameter had a different value. The really nice example (I forget where I have it from; it's very well known anyway) is the turkey. The turkey gets fed by the farmer. The turkey likes the farmer, because he's the guy who brings food. Based on Bayesian reasoning, updating its own perception, the turkey builds a model of the farmer in its brain, and day by day the farmer moves further to the good side. But then, on day 100, Thanksgiving, the farmer comes and kills the turkey. Of course the model could not have predicted that, because there was nothing in the experiential world of the turkey, no indication based on which the turkey could have updated the model in its brain to maybe not be so optimistic, because one day Thanksgiving comes around. So this means that when we think about what we, as humankind, might all die from, we can do some probabilistic modeling that estimates the likelihood of dying from supervolcanic eruptions, or meteors hitting, or natural pandemics, because we have experience from the past. But we cannot use this approach to predict how likely it is that we're going to die from our own nuclear wars, or AI eating us, or a man-made pandemic. I did an interview with Toby Ord, who wrote a book called The Precipice, where he discusses these likelihoods as well. Interesting story.
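The turkey's reasoning can even be sketched as a toy Bayesian update; the numbers are of course invented:

```python
# The turkey's Bayesian updating, sketched with a Beta-Bernoulli model:
# each fed day is a "success", so the estimated probability that the farmer
# is friendly creeps toward 1 -- right up to day 100.
alpha, beta = 1, 1                      # uniform prior over "friendliness"
for day in range(1, 100):
    alpha += 1                          # another day of being fed
    if day in (1, 10, 99):
        print(f"day {day:3d}: P(friendly) ~ {alpha / (alpha + beta):.3f}")
# day  99: P(friendly) ~ 0.990  ... and day 100 is Thanksgiving.
# No amount of updating on past feedings can reveal the unknown unknown.
```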
So, was this crane accident here the consequence of an unknown unknown? I don't think so, because they knew how important it is that the hook doesn't break; they put a safety factor of three into the hook design, they told me. But if the process in manufacturing fails, well, what do you do? Your model won't predict that. All right: models, they're not perfect, but what else is there? There is a nice example that maybe gives a bit of a counterpoint to detailed modeling. There is something called a fast-and-frugal tree, which is a very simple decision tree to decide whether a patient who shows up in the emergency room should be moved to the intensive care unit or to a regular care unit. As you can see, there are basically only three yes/no questions; we don't have to discuss what they mean in detail. This thing performs better than a machine learning model that takes dozens of factors into account. So the doctors would say something like "Haben wir schon immer so gemacht", which is German for, well, we've always known that we don't need this fancy machine learning stuff. But the point is: more data and more detailed modeling don't always mean a better outcome. Sometimes common sense and human experience are what's relevant, especially also when interpreting models. And of course, models change over time; that's science. As we get new insights, we update how our models work: from Newton to Einstein to a grand unified theory, maybe, at some point. So: models are everywhere. The use of models allows us to forecast developments and to quantify uncertainties. That's cool. Models make influencing variables and their effects explicit, and explanatory models can even help us understand something. Models are the basis for constructive discussion; this is really important. When you have a model, you can constructively disagree about whether some additional parameter should be taken into account, or whatever. You cannot do that if you just run around and claim shit, as some people do in this pandemic. But of course, the boundary conditions and limits of modeling must be considered, and in the end, in case of doubt, crap in, crap out is true here as well. All right, this brings me to the end of the talk. Interestingly, I took two minutes longer than in my trial; sorry for running a little bit over time, but I'm done now. There is a book you might want to check out, and of course there are the podcast episodes listed here that are most relevant to the stuff we talked about; obviously there are 330 more, but these are the ones that had a direct influence on this talk. Then there's a second question from the chat: whether path dependence isn't just a lack of proper accounting of state, or whether the person asking missed something there. Sure, I mean, it's similar to the problem with chaos: if you could track everything to arbitrary detail and keep track of everything, then maybe path dependence would not be a thing. But we know that's not possible in practice. Yes, I think that's it from my standpoint; there aren't any more questions I am aware of, unless the chat has more. Is there something more in the IRC? It doesn't look like it. Yeah, I think we can end here.
Models, as well as the explanations and predictions they produce, are on everyone's minds these days, due to the climate crisis and the Corona pandemic. But how do these models work? How do they relate to experiments and data? Why and how can we trust them, and what are their limitations? As part of the omega tau podcast, I have asked these questions of dozens of scientists and engineers. Using examples from medicine, meteorology and climate science, experimental physics and engineering, this talk explains important properties of scientific models, as well as approaches to assess their relevance, correctness and limitations. For more than twelve years I have been interviewing scientists and engineers for my podcast omega tau. In many of the conversations, the pivotal importance of models for science and engineering becomes clear. Due to the pandemic and the climate crisis, the meaningfulness, correctness and reliability of models and their predictions is ever present in the media. And because most of us don't have a lot of experience with building and using models, all we can do is to "believe". This is unsatisfactory. I think that, in the same way as we must become media literate to cope with the flood of (fake) news, we must also acquire a certain degree of "model literacy": we should at least understand the basics of how such models are developed, what they can do, and what their limitations are. With this talk my goal is to teach a degree of model literacy. I discuss validity ranges, analytical versus numerical models, degrees of precision, parametric abstraction, hierarchical integration of models, prediction versus explanation, validation and testing of models, parameter space exploration and sensitivity analysis, backcasting, black swans, as well as agents and emergent behavior. The examples are taken from meteorology and climate science, from epidemiology, particle physics, fusion research and socio-technical systems, but also from the engineering sciences, for example the control of airplanes or the construction of cranes.
10.5446/52042 (DOI)
Welcome back to the third day of Congress here at the Huck Stage, with a very interesting-sounding talk, Open Source as a Model for Global Collaboration, in which Hong Phuc Dang will share successful examples of how open source can be a helpful tool or solution to global problems. And with that out of the way, I can hand over to our speaker. Thank you, Lenny, for the introduction. I'm very happy to be here. My first Congress was 31C3. At that time I also gave a talk on stage, about local production in the fashion and textile industry. I still remember how impressed I was with the whole thing: the ambience, the projects and the people at the CCC back then. And of course, like many people, I came back every year. I still can't believe that this year there is no face-to-face Congress, but I'm glad that we have this virtual experience, and I'm happy to be part of it. A little bit about myself: I was born and raised in Vietnam. What is my relationship with open source? I am a founder of FOSSASIA. This is an Asia-based organization that develops open source software and hardware; we promote open source activities and try to grow the communities in the region. Because of my work and engagement in the open source community over 10 years, I was elected Vice President of the Open Source Initiative, a non-profit that safeguards the open source definition and maintains the open source license list. Recently I also joined the Open Source Business Alliance as a board member; this is a German non-profit that operates Europe's biggest network of companies developing, building and using open source software. What is different between now and 31C3? Not so much. I'm pretty glad to say that I'm still doing the same thing: I work on open source development and open source activities, and I build communities. This is what I do. Today I'm going to cover three topics. First, I want to talk a little bit about lessons learned, with examples of how we build open source projects and communities, and how open source is a model that can enable global collaboration. Then I want to touch briefly on two trends that I have seen bring negative effects to the open source ecosystem over the past few years. And finally, a small call to action. Lessons learned. It all started back in 2009, when my partner and I founded the FOSSASIA organization in Vietnam. At that time we realized the opportunities that open source and open technologies can bring to people in developing nations and countries: the opportunity to learn, to share, and to develop your own solutions. We saw this as an opportunity, and we wanted to spread the idea to more people and build communities, so that people can make their own decisions and build their own solutions for themselves. FOSSASIA is basically a network of people who share the same idea, the same belief in sharing and collaboration. Even though our name is FOSSASIA, we have members and contributors from outside of Asia as well: from Europe, from Australia, the US and many countries around the world. What are we doing? We develop software and hardware projects, like many other open source organizations out there. We run events to bring people together; before they were face-to-face, now they happen in virtual spaces. Another focus of ours is education.
We teach people how to write code and how to contribute to different open source projects, because we believe that in order to change something, in order to increase adoption, everything needs to start with education. So that's also a big focus. Since 2009 we have managed to sustain the operation, and this is actually not so easy. We started with two people, as a grassroots organization; we bootstrapped most of our activities ourselves and did not have any backing from corporations, from government or from any funding organization. So it was always a big question for us how to sustain the operation, how to continue to grow and develop further. We constantly think about models for how open source can generate income that allows us to continue to build and work on the projects we are interested in and passionate about. We have different ways to sustain ourselves: for instance, we offer services around the software that we develop, we sell hardware, and we do consultancy. On the scale of the organization: we have about 35,000 subscribers to our mailing lists and social media, and about 4,000 developers registered on GitHub. Through training and education programs we onboard 2,000 young developers and students every year through coding programs. We used to organize a lot of face-to-face meetings; now we do a lot of virtual events, and also hackathons. We maintain technical blogs; basically, this is a space for people to share technical knowledge. These are some of the software and hardware projects that we develop in the FOSSASIA community. Today I just want to introduce two projects as examples of how open source works, on a small scale, starting from somewhere in Asia. First of all, Eventyay. Eventyay is a project that started in 2015; it is an open source event management system. We have organized events since 2009, and it was always challenging to decide what tooling to use for the call for papers, what to use for scheduling, for ticketing; we always had to use multiple tools. In the very beginning we used Google Forms just to collect submissions. And we realized that there should be an open source solution that helps organizers run events, because it's really important to have events, just like this Congress, where people can get together and share ideas. That was the original goal: to have an open source event management system. We started to build Eventyay, and now it has become fully functional: it covers the call for papers, scheduling and ticketing, and we also recently integrated video conferencing, with the solution entirely open source. We work with other open source projects: for instance, we integrated Jitsi and BigBlueButton, and of course there are also bridges to other video solutions. This is something we see as an alternative to proprietary software like Eventbrite. Another thing I forgot to mention: about 100 contributors have contributed to Eventyay since 2015. They come not only from Vietnam and the part of the world we are in; we also got developers from Europe, and the system is being used by organizations in the US as well. Now we can continue to collaborate with more projects to develop Eventyay further. Pocket Science Lab is another example. In the past few years we have had our assembly at the CCC, and we also ran workshops on the Pocket Science Lab.
This is an open source hardware device for education that was built as a teacher-student project starting in India, but it has now become a consumer product. We distribute it on many different continents, including Europe and the US, and there is collaboration here too: we came to the CCC some years ago and got feedback from the community on how to change the design, the blueprint of the hardware. We also collaborate with a European-level project, Horizon 2020, on the Pocket Science Lab, working together with the Fraunhofer Institute here in Germany on the production of the hardware. I want to talk a little bit about what we learned over the years of developing projects and building the community. I often get questions from people: how did the whole thing get started, how do you get people to contribute, how do you come up with a good project to work on? Back in 2009, FOSSASIA started out as a conference, an event where people meet and exchange ideas. When people come together, they start to develop projects and work together. But at the beginning of building a community, it's really important to understand the landscape: the people around you, what kind of technology they are familiar with. When you introduce something, you need to be sure that the people in the community are excited about the ideas of the project. As you see in the picture, the people around us at that time were very young; they were just getting out of university and had not been exposed much to the global open source movement. So we tried to promote contributions apart from coding. There are a lot of things people can contribute: doing design, writing articles, promoting projects, organizing events, doing fundraising and many more. We realized that promoting non-technical contributions can help to widen the community and attract new joiners. And one thing is still valid today: try to keep the entry barrier low. If you have contributed to various open source projects, you know what it takes to set them up locally. Before someone can start to contribute, they somehow need to install the project on their own machine, and it's a different experience for every project; it's rarely something out of the box that can easily be done by a beginner. So for us, the question was how to keep the entry barrier very low, how to get people past that setup hurdle so they can actually contribute. Another lesson we learned is to understand the motivation of developers. If you want to attract people who write code, which is the core thing of a project, you need to be aware of their motivations. A lot of people in the Asian community are motivated by opportunities to get hired in the future, and by opportunities to travel outside the country, which is very difficult for many citizens in that particular region. And of course, they are motivated to work on tooling they are familiar with. We understand these motivations, and what we try to offer our contributors is something that matches and satisfies their wishes. In 2012, we tried to reach out to more international communities and invited developers from the West to come and connect with our people and share their knowledge, and at the same time we looked for opportunities to bring our contributors overseas, where they can get exposed to a more global environment. Another thing we learned over the years:
There are so many open source projects out there, right? It's not that one day you develop something, put it online, and it will attract attention from the community. It's really difficult these days to onboard new developers or to get people actually engaged and contributing to your projects. And we learned that, as developers, as code writers, people like to improve their skills. So we organize things like coding contests; this has been going on for the past four years already. We do this throughout the year, trying first to help people learn how to code, and then, as they get better, they also win prizes for contributing to our projects. We find it really useful, not only for attracting new contributors, but at the same time for widening the pool of contributors. It is the first step to guide people, to show people how to contribute, not only to our projects but to open source in general. Developer retention. A lot has happened in the last 10 years, and I'm not able to share every detail, but I hope to summarize a few highlights in this presentation. Retention is a big question for many projects, not only ours. When you build an open source project, you need to be aware and understand that at some point people will move on. People need to go on with their lives; they find something more interesting. It is difficult to keep people engaged over the years. Therefore it's important to always reach out to more people in the community and try to engage newcomers. At the same time, you should not put all the knowledge into one core person. It is always important to make sure that you have a backup for whatever you are doing: introduce peer review to ensure that more people can review the code, and have a minimum of two maintainers, so that you don't have to rely on one person over time. Delegate tasks. This is something that we find very useful: when people join the project, they like to have more responsibilities. It motivates them. This is quite an interesting finding: people are not only motivated by financial benefits or by traveling, but some people are motivated by the responsibilities that they have. So we introduced mentor roles, where younger developers or newcomers can get involved. This is something that can motivate people and keep them engaged in the project. And we also introduced development practices. This is not new to many projects out there; the question is how these practices can be enforced and implemented in your development. A few things that came out of our development practices: always match one issue to one pull request. It sounds very simple, but a lot of people don't do it. Break big issues into multiple smaller issues; that also makes them easier for people to review, because it doesn't require so much effort from the reviewer. Test before making a pull request. Of course, this is the standard way, but a lot of people still make a PR without testing first, which makes things more difficult: if you merge and then something goes wrong, you have to revert the change.
Only make the changes that you state in the PR. Sometimes people say the PR is about one thing, but actually there are a lot of different code changes in one PR, which is not welcome or encouraged. Have contributors review each other's pull requests; this is a peer review practice that we always encourage. Document while coding. Documentation is not a favorite thing for developers to do, but we always encourage our contributors to document what they are coding, so the next person can understand and follow up on the progress. Earned write access: basically, after contributing to the project for some time, you earn write access to the repository. And one thing that is very important: avoid private conversations and collaborate with the community on the project-level chat. We have a community chat, and every single project has its own channel. So instead of two developers talking privately about how to fix an issue, we encourage people to have their conversation on the public channel. And how can you make sure that people really follow the practices? It's about encouraging people to remind each other: it's a practice that the developers continuously help each other with and are appreciated for following. So again: open source is a decentralized software development model that encourages open collaboration, and it has proven how collaboration can work successfully on a global scale. Of course, the projects that we develop are on a very small scale compared to Wikipedia or the Linux kernel. But imagine: starting from a project somewhere in Asia, it is now being used in many other countries and has contributors from everywhere. If we can achieve global collaboration with these projects, imagine how much impact it could have if open source were done on the national level or on the government level. I just want to give one quick example here. The current coronavirus pandemic has led to a lot of digital solutions being developed everywhere around the world. This is an example: the digital contact tracing applications developed in Southeast Asia, as you can see here. We have only 10 countries in the Southeast Asia region, and these six countries all developed their own solutions. They all tackle a similar problem — they all wanted a digital contact tracing app — but each country built its own application, even though it would have been possible to share and collaborate in some way. It did not happen. I don't know how much it cost these countries to develop their solutions, but I read somewhere online that the Corona-Warn-App developed in Germany cost over 20 million euros. So imagine: if each country spends this much money to develop a similar solution, why is there no collaboration across nations, so that we can save resources and at the same time speed up the whole process? The corona pandemic is only one of the challenges we are facing these days: climate change, political conflicts, so many issues. Open source collaboration could be a solution to many problems, but we need more examples. We need many successful examples to accelerate the whole open model in all industries. Open source should not be only about software: the open model could apply to hardware, to pharmaceutical formulas, to processes.
And open source, the open model, should be the default for all the different industries, encouraging more collaboration across borders. Moving on, I want to talk a little bit about trends that have brought negative effects to the open source ecosystem, which I have observed in the past years. First of all, digital shaming. Digital shaming is being electronically attacked online. It can literally destroy people's lives, financially and emotionally. And sadly, an increasing number of open source contributors — anyone, really — can become victims of this digital shaming. Have you ever participated in a digital shaming act? For example, have you unconsciously liked a tweet or retweeted something that you read online? I don't know if anyone remembers the incident that happened at PyCon some years ago, where someone posted about a private conversation between two male developers, which was considered sexual, and then these people got fired over it. This is just one example. If you see something happening online and you see so many people liking the tweet, you might think that you should also support the activity by liking or retweeting. Think about it before you do that. If you do not know the person and do not have an understanding of the whole situation, do not be part of digital shaming. One thing I think is important to understand is the self-serving bias. Self-serving bias is an action done only for one's own benefit, sometimes at the expense of others. It happens every day in our lives. For example, a few days ago I forgot to call a doctor to change my doctor's appointment, and when I was asked by my partner whether I had done it, because I didn't want to look bad, I just said: oh, I called the doctor's office, but nobody answered the call. So we tend to always be biased towards our own side, towards ourselves. Whatever claims you see people make online, keep in mind that people often talk from their own perspective. I can try to be a very fair person, but I will always try to protect myself. A lot of activity like this happens on the internet. And I see many contributors of the older generation leaving the community because of this digital shaming — because of something they might have misspoken in public, they are criticized so much by the public that they are forced to leave the community. It's an unhealthy environment for people who contribute to and are involved in the open source community. There is also something else that we should be aware of: there are a lot of people out there who present themselves as victims for public interest, because they think that if you portray yourself as a victim, you can get attention and support from the public. It's also a way to build up your profile and interests. This is unreasonable, but it's still a practice that happens online, on the internet. So it's important for us to be aware and not to be part of this whole thing: do not support such initiatives if you do not know the people involved or do not have a clear understanding of the real story. Another trend that I want to highlight here: diversity and inclusion.
I'm really glad that there are so many initiatives and so much effort in our society to push diversity and inclusion these days. By definition, diversity refers to the traits and characteristics that make people unique, while inclusion refers to the behaviors and social norms that ensure people feel welcome. And as you can see, many corporations and big companies now embrace diversity and inclusion, saying that they want to get more women into leadership positions, that they want to develop a more inclusive recruitment process where discrimination in recruitment can be limited, and many more. You can see more agencies now being formed to advise on diversity and inclusion, and more jobs created for people who research and want to develop further in that field. I'm also very happy about this, because I myself am a minority. I have a diverse background: I'm a woman and at the same time I come from Asia, and for many years during my career I also experienced discrimination. So this is a really great thing for people like myself, and a great thing for a more equal society. However, there are some side effects that I want to emphasize here. There are different initiatives that we should support, but there are also things that create more confusion for people, especially for non-native English speakers. Have you ever experienced coming into a meeting room with your colleagues or people you work with and being afraid of speaking up, afraid of saying something, because you're not sure — afraid of addressing a person because you're not sure what pronoun you should now use to address that person? There are a lot of rules now about the way you should speak in public. Of course, there is an advantage to being a non-native speaker: I can always say that I'm not aware of all the implications in the language. But it's really difficult for people who are considered native speakers; they have now started to worry about what they are allowed to say, and whether it will offend somebody in public. And I hear this a lot in the communities these days. So I just want us to see: diversity and inclusion is a good thing, and we definitely need to support it, but we also need to be aware that white people, white men, are also part of the community, part of diversity and inclusion. It should be about everyone. We should not exclude people or make people feel uncomfortable by coming up with new initiatives that are only applicable to people in particular countries. Again, what I worry about with the side effects of the whole diversity and inclusion push is: is our freedom of speech being limited because of too many rules these days? Can people really give their feedback freely? Criticism is actually not always bad; it helps people to improve and become better. I want to give another example, one that happened to me recently. I was at an open source event where there was a group of people presenting about policy on a European level.
This is a group that advises the Commission on policy, and they were using closed-source software like PowerPoint to present. And there was a message in the community chat saying: okay, if we advise the Commission on using open source, shouldn't we use open source ourselves? The person who gave this comment was attacked heavily in the chat, with people saying that this was an act of excluding people. Even though he just said the truth — if you advocate for open source, shouldn't you use open source yourself — he was called out by many people saying that this was a bad act, because you should not ridicule people, you should allow people to participate. For me, it's not a big thing, but I see that people have different opinions, and that person did not want to come back anymore: just for stating a fact, he was treated as if he did not support inclusion practices in the community. This is something we should all be aware of and think about: support the right initiatives, and be more open — put yourself in other people's shoes. Not everyone has the same understanding, and when people state a fact, it doesn't mean they are trying to ridicule the other person. Call to action. Each and every one of us can offer support with what we are capable of; there are so many things we can do to become good citizens. First of all, use open source software. As in the example I mentioned earlier: my grandmother, my mother — they have never been exposed to open source before, and I can understand that it's difficult to reach these people. But if you work at an organization that advises governments on open source, please use open source. If you got funding from the government to develop an open source project, use open source products yourself. There are so many alternatives: instead of Zoom, use Jitsi or BigBlueButton; instead of Eventbrite, use Eventyay; instead of Google Cloud, use Nextcloud. There are so many options out there that you can use. Just by using open source software, you are really supporting the ecosystem. Contribute to open source projects. There are so many ways: writing code, contributing to documentation, internationalization and localization — translating projects into different languages — organizing virtual events and bringing people together. There is also a lot of design work, and you can make donations to open source projects and talk about different open source projects. That's something anyone can do. If you are a developer, release your work as open source. Remember the example earlier about the tracing apps: imagine how much money we could save just by developing things together. If we have one problem, why do we develop 10 or 20 different solutions to tackle that one problem? We can work together. Advocate for the open source model in your organization. Open source is not only about software; it's about open collaboration, it's about sharing the knowledge.
Bring that knowledge to more people; advocate for this model inside your organization, in companies and in your government. There is so much that you can do here if you live in Europe. In Asia it's so difficult — we never have the opportunity to talk directly with our politicians — but you have that chance here. So do that, and make sure that there is support for open source development. I know that there is a new open source strategy introduced by the Commission for 2020 to 2023, so you can check that out as well. Support open source small and medium enterprises. If you can give out a contract or hire someone to do work, why not work with small and medium companies? We don't want open source to be a field only for corporations and multinational companies anymore; we want more companies to come and be part of the ecosystem. And finally, bring open source into education. There are several ways you can do that. For instance, I teach my mom how to use Ubuntu and LibreOffice. Education can start in your home, and it can continue in school: work together with teachers and universities, do education and coding programs like what we do at FOSSASIA. There are so many small things you can do: connect with people around you, educate people around you, your friends, your family members. Finally, take an active role. Anyone can make a difference; we just need to do it. I would also like to take the chance to invite you to our summit in March, between the 13th and 21st of March 2021, to connect and collaborate with the open source community in Asia. This is going to be a virtual event as well. Below is my email. I'm happy to stay in touch, and if you have any questions about our projects or about this presentation, please feel free to contact me. Thank you. All right. So, I have four questions as of the current standing, and I think I'm just going to go with the first one. Do you know of solutions to avoid information hierarchies — in essence, single people knowing crucial information? So, I don't know of a solution to avoid hierarchy as such, but I can say that in the open source development model, the source code and the process are openly available to everyone. This is a way to avoid hierarchies of information. If you do open source in the open way, you document your work and how the infrastructure is developed; everything is openly documented, so everyone has access. It's the same with many open source projects out there: if you look on GitHub, there are no secrets in our repositories. The way we develop, how the infrastructure is set up, what the blueprint for our hardware is — everything is publicly available to everyone. So I would say that the open source model is a way to limit hierarchies of information. Okay, thank you. We have a question about inclusion: is there an easy way to break the language barrier in an international community?
Many people want to contribute, but not everyone speaks English well enough to discuss technical or other issues deeply. Yes, the language barrier has been a topic for many years. What could be a suitable solution for this? There are translation applications out there, but of course this barrier, the language barrier, is always there. However, on the good side, more and more people are being trained in English. You can see this even in developing countries like Vietnam, Cambodia and Laos: more and more people have started to learn and speak English very well and fluently. So it takes time — we need time for people to learn the language and at the same time to contribute. There's no easy solution to break the language barrier. Some projects translate their contributor guidelines to show people how to contribute in different languages, but that's not a complete solution, because in order to write code — most of the syntax, and the words used, are in English — people need to understand the language at a certain level. The other thing is that people are getting better and better, and there are so many ways people can learn the language. So I think that in the next few years, hopefully, there won't be a language barrier anymore, as everyone will be able to speak and write in English. Okay. And do you have plans on expanding outside of Asia with FOSSASIA, maybe to Africa or other continents? Yes. As I mentioned, FOSSASIA is based out of Asia, but the projects that we do are actually not only focused on Asia. We have partners here in Europe: we work together with the European Union, we work together with the Fraunhofer Institute, and we are part of the OPENNEXT program. So there is already existing collaboration. We do not focus only on the Asian market, because open source goes across borders; anyone can contribute. Africa is also a very good question here. We are connected with FOSS communities in Africa. Again, there is nothing concrete that I can share at the moment, but there are initiatives and user groups active in Africa, and we connect with them at Congress. I don't know if anyone from Africa is here at the Congress, but at the Open Source Initiative and at the FOSSASIA summit, we do have people coming to exchange and connect with us. Cool. And the last question for today would be: how much of FOSSASIA's interest or activities promote contributing to existing popular FOSS software projects? Could you repeat that question, please? It's quite long. How much of FOSSASIA's interest is in promoting contribution to existing open source free software projects? Yeah. So actually we promote not only our own projects; we actively promote a lot of other open source projects. As I mentioned, we have used Jitsi from the start.
Throughout the entire pandemic we have tried to use open source solutions as much as possible. We set up our own Nextcloud instance in the project, we have used LibreOffice for several years, and we use GIMP and Inkscape. We also promote this in our training, at the universities and schools we work with: we offer training on open source solutions. So we do not only promote our own FOSSASIA projects; we actively promote other projects, and I believe there are also some projects I know of in the community that get contributions from the FOSSASIA community, which is something we are very glad about. In order to grow the ecosystem, it's not only about your organization; it's about collaboration. You need to work with other organizations, and you need to foster collaboration in order to grow and strengthen the entire network. So we are very open to working with other projects as well. As I said, we have already integrated Jitsi into Eventyay, and also BigBlueButton. And there are many more examples. I think that really nicely concludes this great talk. And you left your details in the slides, so if anyone still has questions, as she said, write her an email. And I think from our side, this talk is finished. If you have anything more to say, say so. Apart from that, we're done. Yeah. I just want to say thank you very much again for having me. And thank you, Lenny and Marcus, for setting up the whole thing. I really appreciate it. Ciao. Ciao.
We need open source now, more than ever. The world is in crisis: Corona pandemic, recent flooding disaster in South Asia, climate change, political threats, inequality. Open source could be a solution to many problems of our time, because only by working together can we make bigger strides in solving some of the most critical global issues. People from around the world work together on open source projects. They show every day how a fruitful and successful collaboration on a global scale is possible despite different views, personal and historical backgrounds and experiences. In this session, Hong Phuc Dang will share successful examples from communities to governments, at the same time outline challenges and how each and every one of us can play a role in sustaining the open source ecosystem and the world.
10.5446/52043 (DOI)
So, hello everybody, welcome to the xHain stage, first talk on day three. We are really happy to have Alistair and Oris here. They are from the Fenkoko group, which is an interdisciplinary group that does research and tries, on one side, to let people work on their own, but also gets together to discuss and debunk ideas and to help each other with their projects. The Fenkoko group was founded in 2013 by Oris, if I remember correctly. They are researching a wide range of interesting and relevant topics, and they are, up to now, non-funded and completely autonomous, which brings us to the subject of this talk, because these two are going to talk about self-driving cars. I'm really interested in what you have to tell us, and the stage is yours. Thank you. I just needed to unmute myself. Thank you so much for this kind introduction. And before we get anything wrong out there: there was a bit of a misunderstanding — Alistair is not part of Fenkoko. Maybe I can convince him to join Fenkoko one day, but he's taking part, and that's fine. So, let us go in medias res: we're going to talk about Big Tech's 100-billion-dollar delusions with self-driving cars. We are talking about it as a delusion and hoping to create an interesting talk touching all kinds of topics. But we're going to end up with the question of whether the car in itself shouldn't be abolished — and, of course, what interesting mobility concepts would follow if we try to design an encompassing, futuristic mobility that's good for everybody. Thank you. Let's start with Elon Musk, whom I'm sure you all know. Here he is, looking extremely serious. He's a very serious guy, as you know. This was him in 2019: "I think we'll be feature-complete on the full self-driving this year, meaning the car will be able to find you in a parking lot, pick you up, take you all the way to your destination without any intervention." And just to make sure you know how serious he is, he says: "I am certain of that. That is not a question mark." So let's see how they're getting on. They actually released it a year later, a bit later than they planned, and it's now in beta. Here we go. "Okay. So I'm going to let it go out. Oh, Jesus. Oh, my God." Yeah, that's a good example — a good example that this is still beta, and of how important it is to have control at all times, because it just steered directly into the back of this parked car, and it wasn't going to brake. "So it's still detecting those rings on the road as a... Geez. Yeah. Oh, my gosh." This is why we don't have people with us normally. Okay. So there might be a few problems there, and this is really what our talk is about. Over the last few years, hundreds of billions of dollars have been spent in this field. We got that from a survey in 2017, which said 80 billion, and we know that quite a few dozen billions more have been spent in the last few years, when it really peaked. That includes startups, it includes the big tech companies, as you know, and automotive manufacturers, plus a whole network of other suppliers and consultants. We call this, for the sake of convenience, the technology-mobility complex. And they're all convinced, and trying to convince us, that self-driving cars are just around the corner. So how are they getting along? Well, we've seen Tesla, and certainly they have something on the market. It is in beta, it costs $7,000 or somewhere around that, and there have been three fatal crashes so far.
It's very limited compared to what they said it would be. Then you've got Audi, for example, who also tried to bring something to market. This is their A8, and it has Traffic Jam Pilot. They convinced regulators in Germany that it was perfectly safe to read your emails whilst driving with Traffic Jam Pilot on. Unfortunately, they didn't quite convince any other regulator in any other country, and as a result it was withdrawn from the market this year — basically because if there was an accident, it would effectively be Audi's fault. Then you have, of course, the big tech companies. Uber has been very prominent in this field, looking at mobility as a service; that's a concept we'll come back to. They started in 2015 and have invested over a billion dollars. But unfortunately Uber have had some issues too, and we'll talk a bit more about that later on — that was in 2018. In November this year, just a few weeks ago, they announced they were giving up altogether. They sold — or we should say, paid to give away — their self-driving project to another company. Amazon got into the field with the startup Zoox, very recently, a little late to the game. Zoox has gone for the completely autonomous, new-build design. This was released a few weeks ago. However, they were a little bit coy about when this will actually be working on the streets. All they would say is that it's definitely not going to be 2021. Our view is it's probably going to be a bit later than that. And then you have the old automakers also involved. Most prominent amongst those is General Motors, who bought up Cruise. They've been investing in this since 2013. They claimed they'd have a commercial robo-taxi service available by the end of 2019. Well, that hasn't quite materialized. If you go to San Francisco, you certainly see the Cruise cars all around. They recently announced — and they're very happy about this — that they have driverless vehicles for the first time on five streets in San Francisco. They will only be working in low traffic and at night, and there will be one person in the car with an emergency stop button. So a little far away from a commercial service. And then you have the big player, always, which is Waymo, which is of course owned by Google. They've been around since 2009 with a number of different iterations. You can see the first version they had here, the Firefly. But someone must have been very rude about that to somebody at Google, because these are the latest versions you've got here, which are obviously really, really mean Jaguars. Anyway, they've been going in Phoenix for a long time, since 2017. They announced in 2018 that they'd have a fully driverless taxi service. Then they announced it again in 2019. It's taking a little bit longer. They finally announced it again in 2020, and they actually kind of do have driverless taxis in Phoenix, but it's limited to a 50-square-mile area. They're actually supervised remotely, so somebody can at least intervene from afar — a safety driver effectively dialing in remotely. And they also need perfect weather. So again, very limited. And that's always where you see the successes in this area: in these highly controlled environments. Phoenix is an ideal place, with perfect weather, easy streets, no hills. And then you've got other projects like this, the Ford Argo. Again, a lot of investment: two billion from Ford, and even further investment from Volkswagen.
And they're delivering fruit and veg in Miami, which is very nice — again in a very small area, and it requires two safety drivers. And then you get this, which is a genuinely successful project: the Optimus Ride project, in a retirement community. Those are the kinds of places where self-driving cars seem to be working — not so much on busy urban streets. And you can go further than that. This is a quote from Dr. Gill Pratt, head of the Toyota Research Institute. He says: "I would challenge anyone in the automated driving field to give a rational basis for when level five will be available." We'll talk a bit about level five, but that means a fully autonomous vehicle in the way we would all expect. And here's John Krafcik, the CEO of Waymo, in a candid moment a couple of years ago. He conceded: you know what, self-driving car technology is actually really, really hard. Who could possibly have thought that? And I'm going to hand over to Oris to explain a little more about why that is. So first we're going to talk about why we call it a self-driving car, if I may. To answer that, we need to look at language and technology. In general, it is interesting to note that words gain meaning through their use. They can, if you want to say it that way, lose meaning — but they can surely change meaning and become ambiguous through the wide acceptance of more than one meaning in society. And "autonomous" by now has more than one, to say the least. If you look at a definition of an autonomous car — hang on, stay with me — you get the obligatory definition: a vehicle capable of sensing its environment and operating without human involvement. A human passenger is not required to take control of the vehicle at any time, nor is a human passenger required to be present in the vehicle at all. And an autonomous car can go anywhere a traditional car goes and do everything an experienced human driver does. Now, that is a high aim, wouldn't you say? So we land in impractical ambiguities with this — not only because the traditional definition of autonomy is something completely different. I'd like to share with you just a very short version of what we read when we type it into the Stanford Encyclopedia of Philosophy, which is a nice source: individual autonomy is an idea that is generally understood to refer to the capacity to be one's own person, to live one's life according to reasons and motives that are taken as one's own and not the product of manipulative or distorting external forces. Now, this is not a new point; this point about autonomy has been made quite a number of times, especially with respect to autonomous cars, but it is important to mention nonetheless. And we will see how difficult and ambiguous it gets when we go to the automation levels, which are utterly defined by autonomy — and autonomy is supposed to be the more complex concept, so this seems a bit the wrong way around. The Society of Automotive Engineers — oh, by the way, insurers have identified "autonomous ambiguity" as a potential reason for an increase in crashes due to confusion, so it's not only a point of philosophical interest. The Society of Automotive Engineers, the SAE, currently defines six levels, starting at zero — which is why we end at five, obviously.
And these levels go from fully manual to fully autonomous, and they have been adopted by the US Department of Transportation. Now, there is a standard out there that I'm going to come to, and we'll look at the wording it uses to be sure what we're talking about. In general, it's safe to say that level one assumes a system can assist the driver with one driving task — just one. ACC, adaptive cruise control, fits into this category. Level two systems, such as Pilot Assist, can assist with, for example, two tasks, and level two is the highest level of automation currently available. That will lead us to a discussion of strategy and development, where one approach is trying to erase the driver and the other is trying to see through the driver, so to speak, and learn from the driver. The first is trying to skip level three; the second puts great emphasis on it. But to understand where that lands, with all the assistance systems and the wording, we need to map it somehow, which is why I looked at the "Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles" — the long name of the standard SAE J3016_201806, where it's all defined. This standard gives you roughly this explanation. And me, as a philosopher, I need some proper words. As I said, I am utterly aware of the fact that we can use words differently in different contexts for different reasons, but we should also make sure that we understand each other and the context in which we use those words. We should be clear on at least our own intended meaning when we use words, and especially when they are used by others. Sorry, I've got a little scrolling problem here. Okay, so this document refers to three primary actors in driving: the human user, the driving automation system, and — interestingly — other vehicle systems and components. These other vehicle systems and components, or the vehicle in general terms, boil down to processing modules and operating code that overlap with the automation system, and the subsystems that, primary-actor-wise, are supposed to be distinguished — can we go back one? I'm going to be through with this in a second. But just so you know, these automation levels are defined by the role of those primary actors and how they act in traffic. So the standard tries to map the automation levels onto the performance of the dynamic driving task — DDT — and the DDT fallback, which is usually the driver, especially in the systems we're talking about nowadays; at the higher levels, this is supposed to be done by the system completely, which we're going to talk about a little later. And it's necessary to see that it's about the way the system is designed, not necessarily the actual performance of a given primary actor. For example, a driver who fails to monitor the roadway while a level one adaptive cruise control (ACC) system is engaged still has the role of the driver, even though he or she is neglecting it. That is basically the easiest example you can pick, and all the others bring you into actual trouble. This one seems clear, but the others really don't. Okay, so we're talking about problems in decision making, prediction, and responsibilities.
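To keep the taxonomy straight through the rest of the talk, here is a minimal Python sketch of the six levels as they are commonly summarized. The role assignments are a simplified illustration of who drives, who monitors, and who is the fallback; they are not the normative wording of SAE J3016, which draws these boundaries far more carefully.

```python
# A simplified summary of the SAE J3016 levels discussed above.
# The role columns are an illustration, not the standard's normative text.
from dataclasses import dataclass

@dataclass(frozen=True)
class SaeLevel:
    level: int
    name: str
    steering_and_speed: str   # who performs sustained lateral/longitudinal control
    monitoring: str           # who watches the driving environment
    fallback: str             # who must handle it when the feature gives up

SAE_LEVELS = [
    SaeLevel(0, "No Driving Automation",  "driver",          "driver", "driver"),
    SaeLevel(1, "Driver Assistance",      "driver + system", "driver", "driver"),
    SaeLevel(2, "Partial Automation",     "system",          "driver", "driver"),
    SaeLevel(3, "Conditional Automation", "system",          "system", "fallback-ready driver"),
    SaeLevel(4, "High Automation",        "system",          "system", "system (in its domain)"),
    SaeLevel(5, "Full Automation",        "system",          "system", "system (everywhere)"),
]

for lvl in SAE_LEVELS:
    print(f"Level {lvl.level} ({lvl.name}): drives={lvl.steering_and_speed}, "
          f"monitors={lvl.monitoring}, fallback={lvl.fallback}")
```

The interesting seam is between levels 2 and 3: that is where the fallback column flips away from a fully responsible driver, and it is exactly the seam the talk keeps returning to.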
So these levels apply to the driving automation. You can see what I've just been talking about here — it's shifted a bit, but I've tried to repeat the definitions up there. You can see that it maps roughly onto the scale: you have the system, you have the human driver, and you have the other system components, which end up being "some driving modes". But "some driving modes" doesn't give you too much information, obviously. So while we're trying to get informed about the extent of assistance systems and how responsible they are, we end up being really confused, and that's a bit of a pity. We can move on — we are already there. Some of those subsystems are, even in the definition, explicitly excluded from the taxonomy that is supposed to describe the automation. So we have automated subsystems explicitly excluded from the automation taxonomy, and to understand what that means, we have to look at what the heck we're talking about: all these ADAS features, which we have to understand language-wise. These features basically boil down to perception. Every autonomous car roughly has — and you can always argue with the wording and the nuances — a perception system and a decision-making system, and then actuators for those decisions. The sensor set varies, depending on whether you use a Tesla or not, because Musk doesn't believe in lidar, for a reason that I'll come back to later. But the idea is you have surround view, cross-traffic alert, park assist, emergency braking, traffic sign recognition, lane departure warning, adaptive cruise control, collision avoidance, rear collision warning, and all these kinds of things. They amount to different functions with different extents of automation, right? Lane keeping has more automation than — well, I'm not even going to go there, because that's exactly what is so difficult about it. What's interesting and necessary to understand in general is that perception is made up of computer vision and sensor fusion, and it's all about understanding the environment. Computer vision uses cameras and allows the car to identify cars, pedestrians and roads. Sensor fusion merges data from sensors such as radar or lidar to complement the data from the cameras — or infrared when something is close to the car, or all kinds of things, depending on the project we're talking about. Decision-making, by contrast, is on the prediction and decision side, and — unlike perception, which, even though impressive, doesn't seem to be there yet either — it is not developed sufficiently to just roll it out as they're claiming. That's what I'm trying to get at here, while showing that it is interesting to look at these things. All of this we can come back to in the discussion; I've got to move on. So now we're going to talk about advanced driver assistance systems — ADAS — and the interaction with the driver. There are two slides on this, and this goes back to what I mentioned in terms of the turn in strategy: the first bulk of ADAS systems is support meant to take the wheel away, to take away the need for you to take the wheel. And the second bulk — what they are now recommending — is basically making your driving transparent so that the system can learn from you.
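A quick aside on the perception stage just described, before we get to the driver side: here is a deliberately tiny sketch of what "fusing" a camera detection with a lidar return can mean in the simplest case. The class probabilities, the bearing-based association, and the threshold are all invented for illustration; this is nothing like a production perception stack.

```python
# Toy sensor fusion: pair a camera classification with a lidar range
# measurement when they point in (nearly) the same direction.
from dataclasses import dataclass

@dataclass
class CameraDetection:
    label_probs: dict      # e.g. {"pedestrian": 0.7, "cyclist": 0.3}
    bearing_deg: float     # direction of the object relative to the car

@dataclass
class LidarReturn:
    bearing_deg: float
    range_m: float         # distance to the reflecting surface

def fuse(cam: CameraDetection, lidar: LidarReturn, max_bearing_gap=2.0):
    """Associate a camera detection with a lidar return by bearing and
    combine them into one object hypothesis (class + position)."""
    if abs(cam.bearing_deg - lidar.bearing_deg) > max_bearing_gap:
        return None  # measurements do not belong to the same object
    label = max(cam.label_probs, key=cam.label_probs.get)
    return {"label": label,
            "confidence": cam.label_probs[label],
            "bearing_deg": lidar.bearing_deg,
            "range_m": lidar.range_m}

print(fuse(CameraDetection({"pedestrian": 0.7, "cyclist": 0.3}, 10.1),
           LidarReturn(10.3, 18.5)))
# -> {'label': 'pedestrian', 'confidence': 0.7, 'bearing_deg': 10.3, 'range_m': 18.5}
```

The mechanical combination is the easy part; deciding which measurements belong to the same object, and what that object will do next, is where the talk locates the real difficulty.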
Because what drivers can do is still much better than what cars can do — and since it didn't really work out to skip level three, they're now trying to come back to it. The downside of ADAS systems, which doesn't seem so problematic at first, is that it's all confusing, and interestingly that has practical consequences: people don't know what their systems are doing, and thus they create accidents. It's also misleading in terms of wording, to the point where some drivers think they can take a nap while driving, which I found very interesting. And if we look at the next slide, you can see that there are recommended escalating attention reminders for level two automation. Level two — again, we started from zero and want to get to five, and we have level two right now — and it already ends with the car taking over from you and locking you out. What I'm trying to say is: first they tried to erase the driver, and now they're doing everything to make the driver back up the system that they developed. And the consequences, or the costs, that come with that might just not be that desirable, depending on what you're looking out for or have a problem with — of course I'm hinting at privacy, which comes later. Even if we leave out that massive, unbelievably massive topic — and I didn't even go into observation and control in terms of the ambiguity — even if we don't think about the privacy issue, this turn in development shows that the hype is leading to massively, greatly pushed developments that might just not be that well aimed. If the first galloping in one direction leads to galloping in the opposite direction, still with a lot of enthusiasm, that might just be a thing to notice. And that's not a grumpy point against all progress — on the contrary, it's meant in a good way. Okay, where are we? Failures of perception, which I've been mentioning. I don't think I have the time to really explain what it boils down to, but apart from the driver not paying attention, one of the reasons this is problematic is the driver's confusion, and also that it's really unclear what the automated systems can do. I've been reading up on it. Which slide are we on? 17? Pre-mapping? 27? Hang on. Okay, that's good. So, localization — we're talking about localization right now. This is my pre-mapping slide, which hints at the fact that you need specialized code for pre-mapping, and that seems to be a rather difficult issue. More importantly, localizing — which means complementing the GPS signal with other technologies so you really know where you are, and not only within a range of 10 meters — is a lidar-specific problem, because it's about keeping the maps current. The issue is that maps change really frequently, as you can see up there, and if the map changes too much, you can actually lose your localization. And that is needed for the car to know where it is, and so on. So in the end, this advanced technology presents a drawback for self-driving cars. The weather issue we can just skip, because we know that already.
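Returning briefly to the escalating attention reminders recommended for level two: a minimal sketch of such an escalation ladder might look like the following. The thresholds and exact steps are invented — real systems differ between manufacturers — but the pattern the recommendation describes is the same: warn, warn louder, then slow down and lock the driver out.

```python
# Invented escalation ladder for a Level 2 hands-off warning, for illustration.
ESCALATION = [
    (2.0,  "show visual hands-on-wheel reminder"),
    (5.0,  "add audible warning"),
    (8.0,  "add haptic warning (brake pulse / belt tug)"),
    (12.0, "slow down, stop in lane, lock driver out of re-engaging"),
]

def reminder_for(hands_off_seconds: float) -> str:
    """Return the strongest escalation step reached so far."""
    action = "no warning"
    for threshold, step in ESCALATION:
        if hands_off_seconds >= threshold:
            action = step
    return action

for t in (1, 3, 6, 9, 13):
    print(f"{t:>2}s hands off -> {reminder_for(t)}")
```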
Interestingly, there are developments that can see in the dark, but how fast such cars can drive, and whether you would want to sit in them in all situations, is a completely different question. And now, as opposed to the problems of recognizing objects and classifying them properly — which is perception — we come to prediction. Again, even though impressive, it's an open problem. "If you can predict the future accurately, then planning how to react to those situations is easy to solve." That sounds like an A-equals-A sentence, but being able to predict the future actions of recognized objects in autonomous driving is an open problem. That is from Dr. Eustice of Toyota, the SVP for automated driving. The issue is that the car has to solve very specific problems — namely the semantic recognition of something: not only perceiving the surroundings but understanding them. If you perceive the surroundings, you can see people on the side of the road. If you understand the surroundings, you understand the difference between teenagers, who might be erratic and run onto the street — to use the obligatory example — and an older lady with a young child who very conscientiously wait for the lights to turn. That's a difference humans are much better at recognizing than cars. And once you have perceived correctly — if you want to distinguish it like that — you have to predict correctly what those objects are going to do, which is a whole other question. And once you've done that, you still have some ethical problems, which are usually explained via the trolley problem. Now, for a couple of reasons which will become clear with the next slide, I've put this in only as a joke, because the trolley problem, as it turns out, doesn't give us much information for the development of autonomous cars, neither on the programming nor on the ethical side — although it focuses our attention, and thus it's not to be dismissed as a topic. It focuses our attention on Kantian questions of responsibility and autonomy, or on utilitarian questions, for example Mill's utilitarianism, which need to be thought through if we want to be able to structure society properly. We can't just leave ethics out. That seems obvious, but I'm making the point once more. Here you can see driver versus pedestrian and cyclist, which is another version of this. It just roughly says what I told you: there has been a lot of hype around the trolley problem, but in the end the information we get out of it is rather restricted, as opposed to situations that could actually happen to you as a driver — which brings us to fatal crashes due to perception failure. And I'm going to go quickly, because we've made that point a couple of times now. Can I move on? I think we're basically through, aren't we? Okay, we've got the perception issues. Do you want to mention this, and then I'll come to this bit? I think there is one more slide about security, but I'm happy to hand over right now. So, the next slide is around Uber, and this is really interesting. We talked about the fatal crash with Uber in 2018. What was interesting here: when it was investigated by the American authorities, they found what they called a cascade of design failures all the way through the process.
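Before the details of the Uber crash, a tiny aside on what "prediction" means mechanically. The simplest conceivable baseline is to assume a recognized object keeps moving at constant velocity — a toy model, with made-up coordinates, that works tolerably for the conscientious lady at the crossing and fails exactly for the erratic teenager, because it has no notion of intent.

```python
# Constant-velocity prediction: the crudest possible motion model.
# Coordinates and timings are invented for illustration.
def predict_constant_velocity(track, horizon_s, dt=0.5):
    """track: list of (t, x, y) observations; returns predicted (t, x, y)."""
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    vx = (x1 - x0) / (t1 - t0)
    vy = (y1 - y0) / (t1 - t0)
    return [(t1 + k * dt, x1 + vx * k * dt, y1 + vy * k * dt)
            for k in range(1, int(horizon_s / dt) + 1)]

# A pedestrian walking along at ~1.4 m/s:
observed = [(0.0, 0.0, 0.0), (1.0, 1.4, 0.0)]
for t, x, y in predict_constant_velocity(observed, horizon_s=2.0):
    print(f"t={t:.1f}s -> predicted position ({x:.1f}, {y:.1f})")
```

Note that even this crude model needs a track history to extrapolate from. Keep that in mind for what follows: the Uber system, every time it re-classified the object in front of it, threw away exactly that history.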
The car itself had six seconds to determine what kind of object was in front of it. It kept alternating between different classifications, and every time it alternated — from thinking it was a bike, to an object, to a person — it lost the memory of the person's movement. So it couldn't actually adjust its behavior to the situation it found itself in. And then, when it got close enough, an action-suppression system kicked in to prevent sudden movements, which also prevented it from handing over to the driver. What's interesting about this is the level of failure the authorities found with Uber, and also the safety failures in the regime which oversaw the safety drivers: they didn't drug test, there was no oversight. And yet, when it came down to it, the driver was charged in the end, and Uber essentially was not, in any way. And coming back to the point Oris was making about the safety issues: who is actually going to be responsible for a fatal crash? However safe these cars are, they're always going to cause some fatalities — that's inevitable at the scale of automotive transport. But who is going to be accountable for it? This is not a good precedent. So, we talked about cybersecurity. Should I just quickly go through? I can do that one very quickly, just because this is a very specific one. If you see the headline — can you put up the headline? Yes, thank you — it says: a study on automotive industry cybersecurity practices, measured and assessed in an independent study commissioned by SAE International and Synopsys. Now, Synopsys sells software for autonomous driving, so we can see where that is coming from, but they are trying to put the issue into boxes that we can work with. These are the key results from this study — which, by the way, is not only about connected or autonomous cars, but about cybersecurity in the automotive industry in general. The three key points are: software security is not keeping pace with technology in the auto industry; software in the automotive supply chain presents a major risk — an issue that leads us back to proprietary-versus-open-software questions, among other things, because the software comes from third-party suppliers and sometimes the OEMs have to superimpose things on it to make it more secure; and — we can go into this in the discussion — connected vehicles have unique security issues. We could all have guessed that one, but I just wanted to throw it out there, because the study contains some interesting questionnaires with people from industry and from science as well. Okay. And when we're talking about these cybersecurity issues, we also need to talk about the data and privacy issues. A Tesla on the road today is equipped with hardware for autonomous driving. That means it has eight 360-degree high-definition cameras, all recording constantly; it has 12 ultrasonic sensors; it has GPS; it has an inertial measurement unit; it even monitors the pedals and the steering. All of that information is shared with data centers, with Tesla, and it can be recording even when the car is stationary and actually off — it's recording all the time. So it's not just recording the people in the car; it's recording the whole surrounding area and all of the people in it as well.
Research suggests that a fully autonomous vehicle could potentially be sharing something like 40 terabytes of data every eight hours. And then we have the cybersecurity issue. What if we have malware in a car? It is one thing to have it in a computer at home; when that computer is in two tons of metal going at 120 kilometres an hour, that is a bit of a problem. And it is not just malware in the car itself. A lot of researchers are concerned about passive attacks, that is, contaminating the environment with misleading information, for example manipulated road signs, which can really screw up the perception of a self-driving system. These are all things with potentially fatal consequences, so there are massive issues there. I would like to add that we are aware that recording and monitoring are not the same thing; monitoring something and recording it do not have to go together. They usually do, though, mostly because if you did not need the data afterwards, why would you monitor it in the first place? Those are things to think about. Sure. So, having seen all of these complexities, difficulties and challenges, we may want to revisit why exactly we need self-driving cars anyway. One of the obvious answers frequently given is that they will in some way be an immense boost to our economy. One recent report said it is going to add $7 trillion to the world economy; that is twice the size of the economy of Germany. And how is it going to do that? When you look into the report, you can see where they are going: they say there will be $3.7 trillion spent on mobility as a service, in other words, taxis. That is an awful lot of money, more than the entire automotive industry currently generates, which is about $3 trillion. So that is a lot of money we would be spending on taxis. Is that going to make us richer as an economy, as a society? It is hard to see, really. Likewise, they look at freight and transport, with $3 trillion spent on autonomous vehicles. That could be more efficient, but of course we have a huge workforce employed in the transport industry; there are something like 5 million professional drivers in Europe alone. What happens to them? Is this really a great idea for our economy? And fundamentally, is it going to make us that much richer if, when we are in a car, instead of driving it we are looking at our emails? It is hard to see. Safety is the other big argument that is often made. This is taken from Waymo's website: 1.35 million deaths every year, and 94 percent, a statistic you often hear around self-driving cars, 94 percent of accidents are caused by human error, the implication being that autonomous vehicles will somehow address all of those. But a lot of researchers have questioned that and concluded that only about a third of those accidents would be avoided by autonomous vehicles. Even when humans are involved, there is often nothing an autonomous vehicle can do, for example about a pedestrian stepping into the street, to avoid the crash. So the idea of safety is a big, big question. It is an assumption, and there is no real data to support it. What we know is that autonomous vehicles can be reasonably safe in controlled environments, but that is not the same as a normal city. And then we are given this kind of vision.
This is Berlin, a lovely Berlin a few years in the future as imagined by Daimler. And this is from a report from Synopsys, another company in this whole self-driving car industry, about how it is going to reduce congestion, which is a very popular argument; also how it is going to cut transportation costs by 40 percent, which is hard to see given the cost going into the research, and how it is going to improve walkability and livability. This congestion issue keeps coming up. The idea, presumably, is that if you have a whole fleet of autonomous vehicles, they can just drive bumper to bumper at 70 kilometres an hour and be hugely efficient. But it does not really work like that. For a start, autonomous vehicles will almost certainly, for the next few decades, even if they exist, be operating among normal traffic. How is that going to be more efficient? And evidence suggests that autonomous vehicles could actually increase traffic congestion, as people start using them for completely frivolous journeys they are not even sitting in. So the congestion argument is very questionable indeed. Traffic planners also point to public transport. A highway today can carry a maximum of about 2,000 cars per hour. If you are very optimistic about autonomous vehicles, you could possibly quadruple that, and that is really stretching it. But a good public transport system will move 50,000 people per hour, and as this urban planner says, no technology can overcome that basic geometry. Maybe just as a side note: if automation could eliminate all driver-related factors involved in crashes, that would help a lot, but it is a big if. There are numbers out there showing that even with increasing automation levels you do not get that much out of it; I would have to check the exact figures. The point is that those assistance functions work well either on higher-speed roadways that are engineered so well that they work anyway, and that is not usually where the crashes happen; or, even if those cases were excluded, it would still mean only something like 17 percent fewer deaths and 9 percent fewer injuries. We would have to look at the numbers properly, but it is an interesting thing to note. One of the other challenges, as we said, is that it is so difficult for cars to perceive their environment, and inevitably, given the industry and its scale, you are seeing the alternative proposed. This is from Andrew Ng, one of the most prominent AI researchers in the world today, who wrote an article saying self-driving cars won't work until we change our roads and attitudes: it is up to us to adapt to them. And this is going to be argued increasingly in the years to come. As one transport expert put it, the open spaces that cities like to encourage will end as the barricades go up, and pedestrian movement would have to be enforced in a near-authoritarian style. Maybe that is why we are also seeing a surprising amount of hostile action, human beings being really quite cruel to robots and self-driving cars; this example is from Arizona, and there have been other cases as well. But there are other scenarios we need to think about, like the one in this video coming up, one of the situations where you really need to worry about where self-driving cars go. Oh my God. Oh my God.
So, as you might have expected, that was from the recent fires in California, something you would think might be present in the minds of a lot of people in the self-driving car industry, since they are mostly based around Silicon Valley and have probably encountered the fires of the last few years. The real issue when we talk about driving is not who drives a car, but the fact that we have any cars at all: 1.4 billion cars currently on the planet. Anything we do with cars at that scale is going to be unsustainable, no matter how we change the technology that drives them. And of course what the self-driving car vision assumes is that somehow it will all be fine because they will all be electric. Well, this is a lithium plant in Bolivia, and admittedly it looks quite pretty from up here, but you have to remember that each of these evaporation pools contains toxic waste. Lithium extraction is like any other extractive industry: it is appallingly destructive to the environment, and the places where it happens bear a huge cost. And the amounts of lithium required are vast. If there are 1.4 billion cars in the world and we convert them all to lithium batteries, at 12 kilograms of lithium per car, roughly the current figure for, say, a Tesla, that is 16.8 million tons of lithium; the arithmetic is spelled out in the short sketch after this passage. Yet we have about 80 million tons of so-called resources, known quantities, but only 17 million tons of reserves, the part we can actually extract. In other words, essentially all the lithium we know we can extract would have to go into these cars. That means there is nothing left for your mobile phone; it will have to go clockwork. And it is about ten times the lithium production we manage today. Lithium is not the only element to look at, either. There is cobalt as well, a kilo of cobalt in a lithium battery, and that comes primarily from Congo, the centre of some of the world's worst child slavery situations. So again you have this real problem of locking us further and further into an extractive industry which is fundamentally unsustainable. And even on carbon, it is not so clear that a lithium-powered electric vehicle is more sustainable than a combustion engine. Researchers in Germany have found, for example, that over the life cycle of a car, that is manufacture, the energy consumed in service, and disposal, the carbon impact of an electric vehicle here is probably just as high, because of Germany's dependence on fossil fuels, on coal power. Other research has been more optimistic and found that, over the whole life cycle, a standard average petrol car produces something like 250 grams per kilometre, whereas a Nissan Leaf, one of the lightest electric cars available today, produces 142 grams per kilometre. A lot less, admittedly, but still significant; and that is one of the lightest cars.
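To make the speaker's arithmetic explicit, here is a minimal sketch in Python using only the figures quoted in the talk (none of them independently verified here):

# Figures as quoted in the talk
cars_worldwide = 1.4e9          # cars on the planet today
lithium_per_car_kg = 12         # roughly a Tesla-sized battery
reserves_tonnes = 17e6          # extractable reserves quoted in the talk
resources_tonnes = 80e6         # known quantities, not all extractable

needed_tonnes = cars_worldwide * lithium_per_car_kg / 1000
print(f"Lithium needed: {needed_tonnes / 1e6:.1f} million tonnes")  # -> 16.8
print(f"Share of reserves: {needed_tonnes / reserves_tonnes:.0%}")  # -> 99%

So converting the existing fleet would consume essentially the entire stated reserve figure, which is the point the speaker is making.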
Electric vehicles, sorry, autonomous vehicles, are also having another kind of impact: on public policy today. In El Camino in California they tried to introduce bus lanes, and those bus lanes were voted down by people arguing they would be antiquated, that we should wait for self-driving cars. That was in 2014. They are still waiting, and they still do not have any bus lanes. The same thing happened in Detroit, where a transit referendum was defeated. And here you have, at the heart of it really, one of the leading venture capital firms: don't build a light railway system, please, please, please don't, says this person from Andreessen Horowitz; we don't understand the economics of self-driving cars because we haven't experienced them yet, let's see how it plays out. And here you can see even Sundar Pichai, just in the last few weeks, talking about how Google is helping with climate change, how it is using AI to address carbon impact. Nothing about the fact that they are ploughing tens of billions of dollars into a technology that is driving us towards climate change. So the reality is we have one option for safe cities, and that is to take cars out of them altogether. We need to ask why we are going down the self-driving car route in the first place; it is like other technologies the tech companies tend to push on us, like going to Mars, like cryogenics. These are things that belong in a teenager's bedroom. With 100 billion dollars we could do a hell of a lot more: we could build cycle superhighways, we could fund ten years of free public transport. So this is the real future of the car, autonomous or not. This is how cars can contribute to a sustainable future. Thank you. Thank you very much. We have time for a few questions, because there are a couple of them. The first one: is it more a liability issue or a technical issue that no autonomous vehicles are on the street yet? Well, first of all, if I may, it is technologically not yet possible, because operation is always restricted to geo-fenced areas that are pre-mapped and pre-built, or where the weather cannot harm you. So truly autonomous, truly self-driving cars are not out there for technological reasons. But we can be quite glad about that, because the liability side, even though they are trying to grasp at it now, is not at all worked out, and it does not look like it will be developed in a way that lets the rest of us, those of us who do not decide where the millions go, just lean back. The liability problems are so intractable that it is hard to see where the solutions lie under our current regulatory systems. So: both. Thank you. Next question: are regulators or insurers worried about the danger inherent in human passengers who aren't paying attention, only needing to take control in extreme conditions? That seems far more dangerous than requiring constant attention. Nice one. Yes, this is absolutely correct; it is the real problem with Level 3. We went through the different levels: Level 3 autonomy is what seems technically most achievable, and it is what Tesla is aiming for. The real problem is that it is very hard to get drivers to pay attention if they are not actually driving, and research has shown time and time again that, as a result, reaction times when something does go wrong are that much slower. This is a massive, massive problem.
And I think it was highlighted in the slide that you showed. This is a big reason why, for example, Waymo and other self-driving car companies are going straight for Level 5: they actually find Level 5 an easier technical challenge than trying to address this human-interaction problem with Level 3, where you cannot simply erase the driver. And this question has another beautiful connotation; it goes in the direction of what they are actually interested in, saving the car or the human, if I heard that correctly. That is a nice one, because it is exactly what is so interesting about the turnaround they made: first they tried to skip Level 3, and now they are trying to make the driver transparent to the system so they can do 3 and then 4 and 5. And it is so strange because, as I said, or I hope I said, it is never the everyday situations that are the problem in autonomous driving. It is always the interesting, out-of-order situations, the ones the system is supposed to learn from the driver: situations where the system would never have decided like that, but the driver did. That is a 180-degree turn, and then another 180 on top, from the direction you had been going in: of course you want to eliminate the human as an error source, but then you need the human precisely to avoid the worst errors, which is a complete turnaround. Okay, there are quite a lot of questions here, and I am afraid we won't have time for them all. I will take one, but first there is already the question: where can we discuss this further? For tonight I would recommend meeting in the Pet. Thank you, and this is why I am so glad you mentioned it beforehand: PhenCoCo is basically an open opportunity to hold colloquia and conferences on all the topics you are interested in. I am going to give a different version of this talk again at PhenCoCo, and I would be delighted if you came and brought in your expertise. I am sure Alistair is going to be there, and apart from that: email us, meet us, let's follow up in the chat afterwards. Okay, then thank you from me as well; there are a lot of thanks in the chat here in the Pet, including "the best talk we heard", and a lot of questions, so I will post the link afterwards and you can chat it out with them over there. Thank you for the talk. Thank you.
Estimates suggest that well over €100 bn is being spent on autonomous vehicle research, or what we might call the "Technology Mobility Complex". Over recent years dozens of high-profile autonomous vehicle projects have claimed they are tantalisingly close to launch, only for those projected dates to be quietly pushed back. This talk will critically examine the inflated claims of the self-driving car industry and argue that the hyped economic and social benefits are based on unproven and dubious assumptions; furthermore, that the intractable paradoxes self-driving cars present between ethical goals and technological goals, centralised and decentralised systems, as well as data availability and privacy, make the challenges of realising fully autonomous mobility all but insurmountable for decades to come. Drawing on leading research in the field, the speakers will argue that the ethical and technical challenges of autonomous mobility are deeply intermeshed. The very concept of an "autonomous vehicle" deciding ethical situations, as defined by the tech and car industries, is flawed and in itself a barrier to progress. The challenges include intractable ethical dilemmas (e.g. the trolley problem), which are currently unsolved, bearing a significant risk that the tech/car industries will use their economic and political influence to override them. Also challenging is that autonomous vehicles will necessarily be prodigious data collection and surveillance devices and could violate privacy on an unprecedented scale. Even now, with only "level 3 autonomy", every Tesla on the road has 8 HD cameras and 12 ultrasonic 360° sensors constantly collecting data (estimated at around 25 GB an hour) that is shared with Tesla data centres. Another challenge is the distraction from more urgent mobility challenges, which threatens to lock us into a mode of transport that accelerates further towards the tipping point of global ecological collapse. Self-driving car technologies will necessarily depend on connected systems; the more assets feed the data collection, the better the algorithm and the system's abilities will become, so the "technology mobility complex" might simply represent a new arena in which the tech companies can invest supernormal profits to extend their monopoly platform power. Alistair Alexander is a researcher, trainer and campaigner on the links between technology, society, ecology and art. At Tactical Tech he led the award-winning Glass Room project, exploring data and privacy with immersive art in pop-up spaces, reaching over 120,000 people worldwide. While studying Philosophy & English at Humboldt University, Auris-E. Lipinski became a scientific assistant at a Berlin-based IT company for optimisation, monitoring, planning and data analysis, where she had the opportunity to gain a deeper understanding of telematic systems, sensor and map data, and thus of connected computer systems and the fields these technologies can be deployed in. She founded the PhenCoCo project for scientific discussions in the aftermath of seminars like "Konstruktion und Phänomenologie der Wahrnehmung", "Phänomenologie und Kognition" (M. Thiering) and "Computation und Geist" (J. Bach). She has been involved in different research and development projects, guiding her academic interests towards wayfinding and cognitive preconditions for navigation, both computational and phenomenological.
This includes working on spatial concepts found in philosophy, psychology and robotics, subsuming Gestalt theory, embodiment theories, the importance of language and concepts, association and intuition. It also includes a continuing interest in programming methods and environments, as well as machine learning algorithms.
10.5446/52055 (DOI)
So the idea behind Horcrux encrypted messaging is this: since the revelations of Edward Snowden, we have learned about massive dragnet monitoring of different communication channels. The NSA has access to messaging platforms either directly on the back end or through zero-day vulnerabilities; they have potentially infiltrated the crypto systems as well, so that even some things we think are secure, they can read. So they have access to a lot more messages than we originally thought. But one thing happening now is that it is no longer entirely a one-pole world where the US runs everything. Increasingly, Russia and China run their own sophisticated technology stacks, and they have their own equivalents of the NSA spying on their own citizens and on others. But there is probably no single country, maybe not even the NSA's, that has access to every messaging platform. And this is the key insight the system relies on. Fundamentally: how do we send a message from one person to another, knowing that these security agencies have pretty deep access to any given messaging system? Well, if we can take our message and split it up into multiple separate messages, and send those over several messaging systems we believe are independently secure, one in the US, one in China, one in Russia, then unless some single actor can attack and penetrate all of those platforms, the message stays secure. And I have described a specific way to do that which is pretty simple and has strong security guarantees, even if private- and public-key encryption schemes are broken. Here is basically a visualization of the design. The sender and the recipient each have what we call a magic wand, which is just a simple device where you can type in a message and encrypt it into multiple sub-messages, or as we call them, horcruxes. So imagine a really simple device, manufactured in the US or in China, with no Bluetooth, no Wi-Fi, no external access at all, but you can type in a short message, click encrypt, and it shows you at least two different scrambled messages that you can send over independent messaging channels. You type "hello, Arthur" on this device, and it gives you one scrambled text and another scrambled text. You send the first scrambled text on one channel, for example through the Signal app on an iPhone, and the second scrambled text through, say, WeChat on a Huawei phone, which is more of a China-centred application. The idea is that maybe the NSA has some kind of back door into Signal, or into the iPhone, but it might not have a back door into the highest-end Huawei model that Chinese bureaucrats use, and they may be using WeChat to communicate with each other. The rough picture is that the elites in the US, our bureaucrats, our congresspeople, our financial people, are chatting in an encrypted way on Signal, while the Chinese state has some secure method that the NSA cannot spy on in China.
And so if we can send messages on both of these channels, then there isn't any one government that can reassemble the whole message. We send the message over these two different platforms; it is received on the recipient's end, and they can take each scrambled message, enter them together into their own magic wand, which again has no internet access in any way, and reassemble the message that says "hello, Arthur". We can go into the details of how this is designed to be very simple and very secure, but that is the fundamental idea: if you separate the messages out, it is going to be hard, maybe even impossible, for almost any government to access them. I am going to scroll over here. The main thing that makes this very trustworthy is that we are not using any complicated RSA-style, prime-factorization-based encryption, which could be broken; maybe the NSA has quantum computers that can break some of these schemes and has not released that publicly. We are going to use a very simple encryption: the one-time pad. The nice thing about the one-time pad is that it gives perfect secrecy: the ciphertext reveals no information about the message, and the fastest computer in the world cannot crack a one-time pad. So by using one-time pads in the magic wand to encrypt the horcruxes, we have perfect secrecy, such that even if you break half, or almost all, of the horcruxes, as long as one is securely sent and hidden, the cracked horcruxes reveal no information about the message. The other important thing about the one-time pad is that it is a very simple encryption system. A lot of crypto systems have been broken, for example SSL with Heartbleed, because of programming errors in how they were implemented. Anything that uses non-trivial math offers many places for matrices to be corrupted or for loops to be set up so that a bug can sneak in. We want the magic wand to be very simple, maybe even implemented directly in hardware. The one-time pad just uses XOR: it takes a zero and a one and XORs them, a very simple function, so it is very easy to audit the code, and we can fully trust and fully audit that the magic wand does only this one simple thing, both encrypting and decrypting safely. Here is a visualization, a rough Venn diagram, of what may already be cracked. Probably a lot of email servers and email links, and basic SSL, are already accessible to the NSA and to some of the state hackers in Russia and China. WeChat is probably deeply compromised by China, and possibly by no one else; perhaps China does not want other countries to be able to access WeChat and its private citizens, so against everyone but China it might be quite safe. As we investigate more, we may find that there is some set of channels such that no single entity except the sender and receiver can collect all the messages, figure out which ones are horcruxes, and decrypt them. And there is more you can do: with one-time pads it is very simple to break a message into any number of horcruxes and send them along all the different channels.
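To make this concrete, here is a minimal sketch in Python of the split-and-recombine idea just described. It is an illustration of n-of-n one-time-pad sharing, not the speaker's actual magic-wand code, and the function names are my own:

import secrets

def split_message(message: bytes, n: int) -> list[bytes]:
    # n-1 shares are pure random one-time pads; the last share is the
    # message XORed with all of them, so each share alone is uniform noise.
    pads = [secrets.token_bytes(len(message)) for _ in range(n - 1)]
    last = message
    for pad in pads:
        last = bytes(m ^ p for m, p in zip(last, pad))
    return pads + [last]

def combine(shares: list[bytes]) -> bytes:
    # XORing all shares together cancels the pads and leaves the message.
    out = bytes(len(shares[0]))
    for share in shares:
        out = bytes(a ^ b for a, b in zip(out, share))
    return out

horcruxes = split_message(b"hello, Arthur", 2)
# send horcruxes[0] via e.g. Signal and horcruxes[1] via e.g. WeChat
assert combine(horcruxes) == b"hello, Arthur"

Note that losing any single share makes the message unrecoverable, which is exactly the availability trade-off the speaker discusses later.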
You could even send a horcrux in plain sight using steganography. Steganography is the art of hiding an encrypted message in plain sight; for example, you can put it into a photo, just into the noise values of the image (a minimal sketch of this trick follows at the end of this passage). Anybody trying to read your horcruxes won't even know how many horcruxes there are or where they are. Even if they hack your Signal account, they won't know that you also hid one of the horcruxes in the subpixel values of an image you tweeted. Whereas you, as the receiver, could have pre-arranged a way to receive messages, so you can start reassembling them in a way an attacker might not be able to figure out. The other really nice thing you can do with this: a lot of people, including Jeff Bezos, have been attacked with zero-day exploits against the OS itself, which means that even the most secure messaging apps, like Signal or Threema, which are both open source now, are vulnerable through the operating system, and there is really nothing they can do to get around that; these are very complicated platforms. Horcruxes give you an option to mitigate that a little, because instead of relying on a single app running on a single OS, you can break your message into equally secure pieces and send them via multiple OSes. Now you have at least increased the cost of an attack: instead of just cracking WhatsApp or cracking the iPhone, an attacker would have to mount attacks on an iPhone and an Android phone and whatever email server, spending many millions more on zero-day exploits to find you. And even then, an attacker won't necessarily know whether spending a million dollars on a zero-day for your iPhone will get them all the horcruxes; maybe it gets them only one, and you have hidden the others somewhere they haven't even identified yet. So you have increased the cost of cracking the message super-linearly, in a way that relying on one single messaging app or one single OS simply cannot. I'll go into the magic wand. The magic wand, in an ideal world, is a very simple piece of hardware: a simple e-ink screen, no network ports, no USB, no Wi-Fi, no easy way to accidentally open a side channel. All it does is let you type a message, and maybe the only way it can communicate is through a QR code: you scan a QR code to get the message from the magic wand into your Signal app, and then you just send that. I have actually built a demo that works this way; it is very simple. If you want something less secure but easier to use, you could let the app talk to the wand over a local Bluetooth connection, though that does open possible new channels of attack.
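Coming back to the steganography idea mentioned above, here is one simple way to hide a payload in the least-significant bits of an image, assuming the Pillow imaging library is available. It is an illustrative sketch, not a hardened tool; a one-time-pad horcrux happens to be a good payload for it, because the share is already indistinguishable from random noise:

from PIL import Image  # Pillow, assumed available

def embed_lsb(cover_path: str, payload: bytes, out_path: str) -> None:
    # Hide payload bits in the least-significant bits of an RGB image.
    # A 4-byte length header goes first so the receiver knows where to stop.
    img = Image.open(cover_path).convert("RGB")
    channels = [value for pixel in img.getdata() for value in pixel]
    data = len(payload).to_bytes(4, "big") + payload
    bits = [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]
    if len(bits) > len(channels):
        raise ValueError("cover image too small for payload")
    for i, bit in enumerate(bits):
        channels[i] = (channels[i] & ~1) | bit
    pixels = [tuple(channels[i:i + 3]) for i in range(0, len(channels), 3)]
    img.putdata(pixels)
    img.save(out_path, "PNG")  # lossless; JPEG compression would destroy the LSBs

Extraction just reverses the process: read the first 32 least-significant bits to get the length, then read that many bytes of payload.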
Another option, to make it even easier to use, is to run the magic wand as an app on one device instead of on a separate piece of hardware, and this is what I tested and made a demo of. Obviously that is much less secure, because you put your trust back into one single OS that is connected to the internet, and as we know, zero-day vulnerabilities are a major issue. But you still get some of the benefits: if your attacker only has access to the network and does not have zero-days, or cannot find your device, then they might not be able to get into your phone, in which case you can use the magic wand as an app and just send through different messaging apps. I go over the different threat models this involves. For example, a nation-state observer looking at your ciphertext: perhaps they have a passive attack to read your messages, maybe they can even modify them, and maybe they can compel providers to hand over decryption keys, so that anything encrypted with a key they can reach becomes readable. Obviously this does not cover every case, but it covers a lot of cases. And there are ways the scheme can fail. For example, you need every horcrux to reassemble the message. That is a positive, because the attacker needs to get all of your messages too, but it is also a downside: if they can deny service on one of the channels, you lose one horcrux, your receiver cannot reconstruct the message, and normally, when you lose availability, people fall back to less secure means. One way to avoid relying on all the horcruxes is to use a more complicated scheme: instead of the plain one-time pad you could switch to Shamir secret sharing, where you only need m of n horcruxes to reassemble the message (a toy sketch follows below). That is totally something you can do; it is just more complicated to program than XOR, so it potentially leaves room for software bugs. One more vulnerability: the magic wand would probably be manufactured in China, which leaves the possibility that the Chinese government could put a weak random number generator for the one-time pads into the device, insert other code, or even bug the hardware. So there is a risk in not having the magic wand manufactured in a country of your trusted choice. On the website, Horcrux Encrypted Messaging at jprola.com, there is a list of further ways this could be attacked, and I am curious whether anybody has questions about how this could work and what they would like to see. Basically, I am telling people about this so they realize there are ways to get around some of the massive security vulnerabilities we are starting to see through very deep network access and very deep zero-day vulnerabilities in operating systems.
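For the m-of-n Shamir variant mentioned above, here is a toy sketch, sharing a secret byte by byte over the prime field GF(257). It is illustrative only: not constant-time, not authenticated, shares stored as small integers, and all parameter choices are mine:

import secrets

P = 257  # smallest prime above 255, so every byte value fits in the field

def _eval_poly(coeffs, x):
    # Horner evaluation of the polynomial mod P
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def split_byte(secret_byte: int, m: int, n: int):
    # Random polynomial of degree m-1 with the secret as constant term,
    # evaluated at x = 1..n; any m points determine the polynomial.
    coeffs = [secret_byte] + [secrets.randbelow(P) for _ in range(m - 1)]
    return [(x, _eval_poly(coeffs, x)) for x in range(1, n + 1)]

def recover_byte(shares):
    # Lagrange interpolation at x = 0 from any m of the shares.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return total

assert recover_byte(split_byte(42, m=3, n=5)[:3]) == 42

With this variant, an attacker still learns nothing from fewer than m shares, but the receiver can tolerate losing up to n minus m of the channels, which directly addresses the denial-of-service weakness of the plain one-time-pad split.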
"Nation states can break some encryption, hack your device, and spy on all communications with their dragnets. How do you send secure messages leveraging adversarial nation-states?
10.5446/52056 (DOI)
Some of his experiences were about what is going on there, and about the ideas that underpin this revolution: an anti-capitalist, anti-patriarchal, ecological, democratic revolution. And yes, just to share a bit about your talk, the title: Hacking Democratic Modernity, how to live and what to do. Great questions to start with. So welcome! Thanks very much for having me and for the invite. To begin, I want to give this talk in the name of, in the memory of... Stop, my Zoom left, what happened? There was an error; I think maybe it is the battery or something. Okay, I am sorry, my Zoom stopped and I did not know what was happening. Because it is a tradition, and I think a really good one, in the Kurdish movement and also worldwide: I want to give this talk in the name of Anna Campbell, an internationalist woman from the UK who died on the 15th of March 2018 in Afrin, in the struggle against the Turkish fascist proxies and the Turkish army. And because this is happening right now, I also want to remember the five comrades who died yesterday fighting the Turkish proxy army in Ain Issa, which is under attack at this moment. So this at the beginning: Şehîd namirin, which means martyrs never die. And now a little bit about me. I come out of East Germany; I have been part of the antifascist, autonomous movement for a long time, and I got in contact with the Kurdish movement in 2015, while it was fighting in Kobanê. Some time later I made the decision to go there, to see the fight, because it really means going to war. This is a real beacon in this world, and I wanted to learn what makes this revolution, and at what price, and to see its values. I also feel really connected, and it is nice that I could bring this in here, to the ideas of cyberpunk; I read a lot of Gibson and so on, and the idea of fighting this technocratic system from within influenced me a lot. And I am also an IT professional. So these were my two sides: on the one hand the antifascist, kind of anarchist guy, and on the other hand someone who really likes to play with the internet, to go deep, but who also sees the really big dangers there. That was my perspective going in. So, to start with, I should maybe explain some of the words I am using. The first thing I want to explain is: what is a hacker? Because for me, hacking does not have much to do with what you have; hacking is not something only IT people do, and you do not have to know how to program. Hacking is a view on the world. You see something, and the system tells you, for example, this is an umbrella. The guerrilla in the Kurdish mountains use this umbrella as cover to hide themselves in the mountains. So for them it is not the umbrella of capitalist modernity; for them it is a tool to protect themselves.
And there are a lot of other examples of hacking, especially in the struggles of the global south, but also here. It is about how you approach the world, how you see the world, how you ask questions: how to repurpose the meaning of an object for, say, a revolutionary purpose. Yes. And there we come to a quite interesting point, because capitalist modernity and democratic modernity are two terms I like to use, and they go back to Abdullah Öcalan. Yes, the ideological leader of the Kurdish movement, who started the movement years ago. He develops these terms in his books; I do not want to go too deep, because I am also not the right person to do that. But for me they are connected to hacking. Because everything around us is built by capitalist modernity: every tool we have, the mobile phone, the internet, everything is built by capitalist modernity. But the beauty is that within all of it there is democratic modernity; there is resistance. There could not be a planet or a society that is purely capitalist; there would not be a planet at all, there would maybe be a coin flying around in space. Probably a really ugly coin. So in everything you see around you, and this is also true of us, everything we are, there is the capitalist side and there is the democratic-modernity side. And hacking, for me, is a way of freeing this, of finding the beauty in really ugly things. For example, there are these internet sticks you use to get online, and people looked at one and said: no, this is not just an internet stick, this is actually a device you can use to transfer a lot of data, and you can use it for free. So they hacked it and saw its full potential. And I think that is what capitalist modernity does with us: it says we can only go one way, that we can only act in a really binary way, when actually there are so many ways and so many possibilities. So yes, I hope that is getting clear. The idea of how capitalism homogenizes reality, and, as its opposite, the democratic impulse that exists in life, in everything, which reflects diversity and differentiation: how one object can be used for many different purposes, for example. Yes. And in this way, when I went to Rojava, I saw a lot of hackers. That is what is happening there. On the one hand you have really difficult situations, where you do not have many tools or resources, so people get really creative. On the other hand you have this strong ideology, this strong knowledge of how to live, because there they give an answer to how to live, something we do not have here. Combined with that creativity, that is actually a big force. What is the problem there? For that we need to talk about colonialism a little bit more.
The way colonialism works is to take the resources from the colonized people and concentrate them all in the centre. And me, coming from the centre, I had all these resources: I had knowledge, I had a PC at a really early stage, so I could learn a lot and approach these technologies, which are used to oppress people, on my own terms. But in Rojava you often find people who did not have that access. So you actually have great hackers there, but unfortunately they are missing some crucial tools to defend themselves against capitalist modernity. And those tools are here; the tools are lying in the centres. And then you get to the point where you become an internationalist. Because you realize: movements like the one in Rojava are happening elsewhere too, in Mexico for example, across South America, all over; I think we all saw the news from Guatemala, with the Congress building burning. So you start to realize: to really overcome this, we need to work together, we need to connect. And I think this was one of the main things I learned there. In Rojava you learn how to live, in a way I never really learned here. And we also talked before about how hard it is for internationalists who went there to come back into this reality. Yes, because then you have the knowledge of how you could live, and you have to fit into this reality over here again, a reality which attacks you. So the point is to connect. And I think this is a task for hacker communities in the central places of this world, and Berlin, Germany, is for sure one of these central places: there is a real need to connect to these struggles. That means going there to learn, as equals, as comrades. And when you go there, you need to listen a lot, because we are used to talking and to thinking that we know, that we are at the forefront, which is not true. The people in Rojava, the people in Chiapas, they are actually at the forefront of the fight against capitalist modernity. They have some crucial answers, and we need to find our role in this. And our role is, first of all: listen. Because when we do it the other way around, we are being imperialists again. Then come back and start to build up here with this knowledge, and build equal connections; in Berlin, for example, there are approaches for building up communes in this spirit. But also keep in mind that we need to share knowledge. There is this great hackerspace movement around Europe and the US, and I think it is also slowly spreading in the global south, though I am not 100 percent sure. This is a thing we need to do; there are people going to Rojava to build up hackerspaces, and that is a way you transfer knowledge. Because when you are there, and I was there with a technology perspective, you expect, okay, let's see, something like Freifunk, where you actually have control over your infrastructure, infrastructure which is not capitalist.
And then you realize: the knowledge is not there yet, and the infrastructure is difficult to run without this knowledge. But running this infrastructure is a need of the society. You need resources, you need knowledge, and you need the support of others who can help mediate that, because there is an embargo. Yes, right, so there are no resources; for context, for people who do not know, there is an embargo on the region from the surrounding states, and Turkey is attacking them as well, so they are kind of enclosed. So the comrades over there have to make deals with people they actually do not want to deal with, just to keep the infrastructure running. And then you come back to Germany and you see all these nice talks by people who actually have ideas about exactly that, and you think: this needs to connect, because otherwise how is this going to work? We need to share this knowledge. We need to go there, and, this is another part, we need to see where we are in the world right now: we are at a breaking point, and with this whole corona thing it is burning even more. I think we will soon be at the point where we in the Western world need to make a decision. And we need allies to fight, and we need places to go when things get really bad here. So for our resistance we also need Rojava, and it is the same the other way around. You need to look at it in a holistic way; neither works without the other. A problem I really see in the hacker communities here is that they look at solutions, which is fine, but they look at solutions to the problems they themselves have, and those are not necessarily the problems of the struggles in the global south. Yes, exactly. And what often happens is that technologists try to build technical solutions for political problems, when it should be the other way around: we should look at what the political problem is and then think about what sort of technological infrastructure we can build to support that process, politically. Yes. And this is another big point you see in Rojava: there it is the other way around. The people decide, and the needs of the people are the things for which solutions have to be found. It is not top-down, the way we have it here, where all this propaganda tells you what you need to buy; during this corona thing, before Christmas, there was even the line that buying is now the patriotic thing to do. That is not the need of society, it is the need of corporate powers. So how do people decide locally, communally, from the bottom up? Can you tell us a bit more about this process? I can try. I myself was not so much involved in this local community work; I did other work. But from what I know, what I saw, and what I heard from comrades who were in these works: every street, every small neighborhood has its needs, they discuss their problems, and they have this council structure.
Like a council or an assembly. Yes, and there these problems are discussed, and there is a strong principle that problems that can be solved in the community itself should be solved there. Where you can see this really clearly is the self-defense units: basically everyone has their own self-defense unit. I get a bit confused because they also change the names often, for whatever reason, but for example the Christian community has its own self-defense unit, there are the women's self-defense units, and so on. And they are controlled by the local people, in the sense that they come from there. There are other parts of self-defense which operate on another level, a different kind of self-defense, because they are fighting against ISIS or against the Turkish army, and those are not controlled directly by the local people in the same way. That is so interesting, because it means that in this society, what we would call the police is actually controlled by the people. They are not even police; they are just there to protect. Yes, the forces basically answer to the people, instead of to bureaucrats or politicians, the way it happens here. Right, very interesting. I am not actually sure I am allowed to share this story, but I will share it anyway, because it is a story I really like, and for me it captures the understanding of policing there. There is a contradiction here too: after the revolution, some people who had been police before ended up in similar policing positions within different structures, not all of them, but some, and so the mentality of being the police, of being in charge and telling people what to do, is sometimes still in them. Now, at demonstrations there are no police; there are these HPC self-defense units. When you hold a demonstration, the self-defense units of the people are around, not a broader policing unit, which is called the Asayish. So there was a demonstration, I think it was about sugar or something, I am not sure, a big demonstration, and people were really angry, and the Asayish forces came, still with the old mindset of: oh, you are not allowed to demonstrate here, this is dangerous, we forbid this demonstration. But luckily the local self-defense forces said: you want to forbid this demonstration? Then we will lock you up. I do not think it came to blows, but in the end the self-defense units of the people locked up the Asayish forces who had wanted to forbid the demonstration. There was a lot of chaos around it, and in the end it was all resolved, but it shows a little of the autonomy of the people.
Yes: there was a police-like unit that still thought, no, we decide; but when you have your own self-defense, you can actually do something about it, instead of it being like it often is over here. And beyond self-defense there are also food cooperatives and different types of organizations, and they try to implement this structure of communal decision-making in everything. There are parts where it is not implemented that way yet, and I have been back for two years now, so I am not 100 percent sure of the current state. The Rojava Information Center has some great resources on this, including a PDF; I really recommend looking them up, because they have much deeper knowledge and do a lot of interviews with people there. So this is not really my expertise. But you do have a lot of cooperatives around, especially women's cooperatives, because that is a core value of the friends over there: it is a women's revolution, that is where you see the most progress, and women are at the forefront of the fights. And I think this is a really important part. There are also women-only structures, right, apart from the mixed ones? Women's groups that decided to have autonomy over their decisions? Yes, there is an autonomous structure of women which is completely autonomous. And there is the system of co-chairs, which means positions are always 50/50: you do not have just one head of a city or whatever, you have a female co-president and a male co-president, and that is everywhere; it is a principle you see in every structure. This shows how things are going, and the cooperatives that are most developed are the women's structures and women's cooperatives. So interesting. And can you maybe tell us a bit about everyday life when you were there? You were there eight months, right, in the Internationalist Commune, doing work; what does that look like? Well, my everyday life was not the everyday life of the people of Rojava, to make that clear; I was in the Internationalist Commune. There are different stages of being there, but normally you sleep together in communal places and have a communal life. You sleep together, you wake up, and depending on where you are scheduled, you may be responsible for the day; the cooking rotates, every day someone is responsible for cooking, and that day you cook all day long. So you wake up, you do sports together, you clean together, then you eat together, and then you have a meeting where you divide the work: who is doing what, what kind of work needs to be done. And there are different works. In the commune we had the environmental works, which were all about the garden, but there were also works with the society, where people taught boxing lessons or something, and there are media works, writing something, and a lot more.
So you do these works, you come together again in the evening, and then you have smaller groups; this is called tekmîl, where you reflect on the day. Not at great length; it is more that, when you go there, you first get some educations, and there you learn to see, for example, patriarchal or individualist behaviour. And in these tekmîls you reflect: where did I show this today, or where did I see it in my comrades? That is what the tekmîl is for. Yes, the critique and self-critique sessions. Though actually there is a misunderstanding: tekmîl is meant to be short. A criticism we often got in the commune was that our tekmîls went on for an hour, while the comrades would say, hey, a tekmîl is five minutes. There you see the different mentalities. So that is roughly the daily structure in the commune, the communal life, and these structures, this way of living, you find in a lot of the places you go. That is how it works within the movement; for families the structures are different, they normally live a family life in a way, but these principles are carried everywhere. You find them in military structures and also in the cooperative structures: when you are working, the cooperatives also hold tekmîl, there too you try to see these behaviours, and there too you share the work in a communal way. So interesting, because I even heard that the army, the defense forces, are democratic too; I do not know how true that is, but I heard they have this practice and then decide who is actually the leader? Well, I think it is nice, and there is often a story told of a small girl in Kobanê who really deeply criticized the commander of the forces in Kobanê. So this is happening. But it does not mean that who commands is simply decided in this democratic way; that is not the reality there. You need structures to fight the fight. And you also need to understand this is not the perfect society; it is an ongoing revolution, so you cannot just give everything away and say, well, you are now a free society, because then you would actually have warbands going around doing as they like. When you decide to have an army-like structure, you need some control, for political reasons. So that one is an urban myth; okay, don't fall for it. Lovely. So we have about 20 more minutes. I am going to check whether there are any questions in the chat. Nope, no questions yet, but I welcome questions from the audience if anyone wants to ask anything; I have many questions myself. I think you have something else to say first? Yes, I have some ideas, because the talk was called how to live and what to do, and I think we covered how to live a lot, but there was also the part about what to do. Very important. So, there is one big problem in particular.
The problem — one of the experiences — is the terror on the society that comes from drone strikes, and from the drones themselves. And this is happening not just in Rojava; it's actually a bigger problem. So an idea which is around — which also partly came out of the hacking community, but which so far lacks the knowledge and people to implement it — is actually a kind of drone watch, 24/7: a page where you can see the movement of the drones. Oh, wow. And that would actually be really useful. It's not useful for us here, because we don't have drone strikes here. But if you feel the terror which these drone strikes, and the constant presence of the drones, have on everyday life — this is a really big need. So basically: how to create a backend that allows you to detect drones that are flying? Via some satellite, or how would that work? Yeah, imagine how it would work. Detecting drones is actually not that hard — I think there would be technical ways to detect drones with cheap Raspberry-Pi-device kind of things. But the other way would be to give people a way to just say, hey, here's a drone — to have channels with trusted contacts who can spread this. And that would have two positive effects. One is that people in the region could see: over there, there's a drone. The other is that you could show the world how many drones are actually flying around and terrorizing the people — because one thing is the drone strike itself, the other is that these things are constantly over your head. And I think this is a project to go into, if somebody feels like doing something.
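The talk leaves the drone-watch idea at the level of a sketch, so the following is equally sketchy: one way the trusted-reporting-channel half could look, as a tiny Python service. The endpoint design, field names, and in-memory store are all my assumptions, not anything the speakers specified.

```python
# Minimal sketch of a crowd-sourced drone-sighting relay (hypothetical
# design): observers POST sighting reports, watchers GET the recent list.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

SIGHTINGS = []  # in-memory store: fine for a sketch, not for production


class SightingRelay(BaseHTTPRequestHandler):
    def do_POST(self):
        # A report might carry an observer id, rough location, heading...
        length = int(self.headers.get("Content-Length", 0))
        report = json.loads(self.rfile.read(length))
        report["received_at"] = time.time()
        SIGHTINGS.append(report)
        self.send_response(204)
        self.end_headers()

    def do_GET(self):
        # Return only sightings from the last hour.
        recent = [s for s in SIGHTINGS if time.time() - s["received_at"] < 3600]
        body = json.dumps(recent).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), SightingRelay).serve_forever()
```

In any real deployment the hard parts would be exactly what the speaker names: authenticating the trusted observers and doing the detection itself.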
The other thing — I mentioned it already — is to go there: experience this, but also learn there, bring your potential, bring your knowledge there and transfer the knowledge. This can be done in many ways. And the other thing is: do stuff here. A really good example is a hacker called Phineas Fisher — look her up. She's done some great hacking, of banks and so on, and her last announcement is a really good one; read it. Another good example, which also came out of the CCC environment, is Telecomix — I think they don't exist anymore, but during the Arab Spring, in Egypt and I think Syria, they did a lot: in Syria they leaked a lot of surveillance data, and in Egypt I think they provided internet access when the government shut it down. So these are good examples of alternative media and infrastructures of resistance. But also see what our responsibility is here — because this resource transfer will not happen fast, so with our knowledge and our resources we need to do good stuff here too. Are there any hackers out there who would like to support this idea of building this map for the drone system? How can they get in touch, who can they contact? That's a good question, I didn't talk about this — but they could probably get in contact with the internationalist commune. They're actually on Facebook, and they also have email addresses and so on, with PGP and everything. And one way or another you will probably also get in contact with me, because they know me and so on. So let's stay in contact. And with any other crazy ideas, you can also contact the internationalists — they're used to internationalists. This is another thing about the approach: for other structures in Rojava, for example, it's a little bit hard to have us, with our mentality, coming there — you need to move your mentality — and the internationalist commune knows how to deal with you, because you come there and you think you know everything. And this is a problem. Yeah, of course. And it's especially a problem with this tech kind of thing, because then you think you know everything, you know how to do stuff — and it's actually not working there like that; the reality is different. Yes, you always have to look at the reality there. And the deciding people are the society there, the other people — not you. Here you can go to some government and argue with them, and sometimes they will listen to you if you're a hacker. There, you have to go to the people and argue with them, because they will tell you what to do. You will not just go there and say, hey, we do it like this. Yeah, of course — me and my European mind telling you this is the right way. They will tell you really fast that this is not the way. This is democracy, brother. Yeah, there are also lessons you need to learn there. Nice. So the three main values, the pillars of the revolution as I understand it, are democracy, women's liberation and ecology. Yes. And actually, like the women's revolution, the youth structure is also independent in a way — the youth, the youngsters, have an autonomous role to play there too; it's important to mention that. But yeah: ecology, women's liberation and these autonomous structures — this you find there. I think it might even be possible to find it here too. This is the dialectical way — for sure it's here also. We are kind of capitalist persons, but within us there's also the democratic side, and we need to find it. And this is something where you get real help in Rojava, because they have a lot of knowledge about this — how to build this democratic confederalism, but also how to approach this tekmil, this communal life. This is all stuff that is really hard to build here. You need these spaces which are liberated from capitalism to really go deeper into it. For myself, being there for eight months, I made so much progress in my personal approach to how to fight, how to live — I think over here I would have needed five years to get there, and there it took some months. I actually learned so much. Because of that, you need these free spaces. This is their value, and because of that we need to protect them — this is also our fight. Because without these places, we will have a hard time.
So yeah, and this is also why this protection matters. Because this is a question I often get asked: you're fighting here, so what does Rojava bring to us, for example? And I would always answer: if there were no Rojava, or places like the Zapatistas or whatever, there would be no hope. Rojava was hope for me, and the friends, the comrades over there — this is the hope that keeps me going. Same with the Zapatistas. You know, they're coming, I think, next year — we're organizing it — to Europe, and I think this is hope. These are the beacons of hope in this world, which is in the total grip of this capitalist modernity. And because of that we really need to fight. We need to weave all these revolutionary imaginaries of the world together, so they become stronger together. Yes, it really does go together. And this is the international perspective: it doesn't really matter where you fight, but you need to connect with each other, and with that force we will overthrow this. Just don't divide too much. No, we're beyond dividing — we've been dividing for the last 100 years. Or 500, or 5,000. Do you have anything else you would like to share? No, actually — well, greetings to all of my friends, greetings to everybody who sees this. And maybe there's a question — somebody asked just now: what are you organizing next year? That's a good question, but I think that's not a question for me; maybe it was directed at the Zapatistas. Ah, yeah, maybe that — I'm not one of them, as you can probably tell. But the Zapatistas, which is another revolutionary group, in the south of Mexico, in Chiapas — they are coming, not just to Europe actually but to other parts of the world too, to spread their knowledge and to share with the local experiences of resistance here. So watch out for that; I think they will be here around September, maybe. And many migrant groups and collectives around Berlin — we are also helping to organize their coming to Berlin, because they're also going to other cities, I think in France, and many other autonomous places all over Europe. So also watch out for that. And of course, if there is an opportunity to connect them to the internationalists and to the Kurdish movement, that should also happen. I think that's already happening — I read something that the movements are connecting there. Of course, yeah — this is all happening behind the screen, not on it. And then my last words will be: I wish you all really big success in your fights, your hacks and whatever you do to destroy this capitalist modernity. And hopefully we see each other somewhere along the way, because it's a joint way. Serkeftin, as we say in the Kurdish world — it means success. Serkeftin. Serkeftin. That's what you always say when you leave somebody, because you always wish your comrade a lot of success. We wish you a lot of success too, brother. Thank you for everything — it was amazing. Nice to meet you, brother. So, my name is Julio; this was Bear. I'm coming to you live from Berlin. Thank you so much. Thank you.
Hackers of the Democratic Modernity — how to live, what to do? What is the role of hackers in the struggle for, and the development of, democratic modernity (and what is that, anyway)? What responsibilities do we have in the heart of the beast? What do resistance movements worldwide need? What can we learn from them? Based on experiences made together with the Kurdish freedom movement in Rojava and the Internationalist Commune, we want to give a short input on this and discuss these questions with you afterwards.
10.5446/52155 (DOI)
This is work that I've been implementing with a team from ITM, the Institute of Tropical Medicine, as well as people from the Brussels region who are involved in the regional public health institute of Belgium — in this case, Brussels. A particular thanks to Brecht, the first author of the paper. I won't spend too long on this slide because we don't have that much time. To give some context: Belgium has been quite at the top of the rankings in terms of cumulative confirmed COVID cases as well as deaths — I think that's also something that came out of the work that Renaud has been implementing — and we won't be talking today about why Belgium has been ranking so high. This is just to illustrate how it compares to some of the MOOD countries in terms of confirmed cases; I've also listed the US and China, and all the way at the bottom Finland — I included it because it's one of the MOOD countries, and it's hovering somewhere below. But what we're going to talk about today is the Brussels region. For those who've been to Brussels, there are two things I suppose everyone has seen and visited — it's not that big a city either. Over here you see, by the way, Manneken Pis from a few days ago: it was the one-year anniversary of the COVID pandemic, which obviously was a reason for celebration. Some numbers: the population of the Brussels region is around 1.2 million, and to date we've seen a bit more than 100,000 cumulative confirmed cases in the region, which comes down to about 9% of the population having a reported infection with COVID. And why have we been focusing on the Brussels region for this analysis? One thing I'm going to say in advance: this whole talk is a rather descriptive analysis, nothing fancy. I know that within MOOD we've been doing a lot of really awesome modelling work, published in Nature, Science, whatever; this is more a quite traditional epidemiological study, trying to compare different datasets, etc. That's the side note. But to continue with why the Brussels region is the focus — to tell a bit of the history of the pandemic in Belgium: the first wave was kind of over, if you can say that, and in the summer period we were doing more stuff outside, going to listen to music with a bunch of people together. But then, suddenly, around July, we saw a brief surge in cases in Antwerp, and the local authorities of Antwerp followed up on this really well and very quickly and said, okay, we're going to do everything possible to curb this. So a curfew was introduced, close contacts were limited to five, there was mandatory teleworking — a couple of things — and that really helped, and this limit of five contacts was also adopted at the federal level.
Then what happened was that around the time the schools opened — the summer period finished, schools opened — the federal government decided to open up: they suspended the limit on the number of contacts. And just to say what we actually mean by these contacts: within Belgium they had this bubble concept, so you're allowed to have people in your bubble, and with those people you could do whatever — you could be within one and a half metres, you could talk as long as you want, you could hug, kiss, whatever. Around the end of September they said, okay, we're going to suspend this restriction on the number of contacts — and it was actually at a time when cases were still rising. It lasted only a week, because then everyone started to think this was not a good idea; but it was in the Brussels region that the immediate surge in cases was most profound, and that's why we're focusing on it right now. In this table you see all the measures that were put in place in the Brussels region — note this is not Brussels city only, it's Brussels and the wider area. I've listed the first three periods into which we can divide these measures: up until the limit on contacts was suspended; then a period with considerably more restriction on the number of contacts, where you were only allowed three contacts, schools were still open, but the bars were also closed; and this was followed by further, gradually increased restrictions — around the 21st of October only one contact was allowed, restaurants were also closed, a curfew was in place, there were no indoor sports, so physical contacts were further limited. And then there's a final period that we define, when teleworking was also mandatory on top of the above, shops were closed, and — this was the 31st of October — the autumn school holiday started, which was actually extended by a week. So that's a bit of the context of the Brussels region. What did we do in this particular analysis? The idea was to see whether, by describing trends over time, we could evaluate the effect of these measures — the six distinct periods as well as school reopening — by looking at the change in the reported number of contacts. We kept the social contact hypothesis in mind: as you see with all the social mixing surveys and the data informing models, there is this hypothesis that the number of infectious contacts is proportional to the total number of contacts, so you would expect physical distancing measures, by reducing the mobility and social mixing of the population, to have an effect on the level of transmission. So what did we do? We looked at three different types of trends — I should actually say four, because I also looked at the reproduction number. First, the proportion of reported cases by age group and the trend in that over time, to look into the role of teenagers and the effect of school reopening.
Second, we looked at contacts over time: the average number of contacts per person, overall as well as by age group, and the change in that for each of the intervention periods. And third, we looked at specific transmission events by age group. So what type of data did we use? What was quite unique — and to date we have been the only ones looking at this type of more detailed regional-level data, partly because of privacy reasons — is that we took, on the one hand, the official statistics from Sciensano, the public health institute of Belgium, where you have the number of cases reported per age group, and on the other hand, data from the contact tracing system. In Belgium this is dealt with by the regions: the three regions of Belgium manage this phone- and field-agent-based contact tracing system, and every time there is a case, Sciensano notifies the regions and provides details on who should be contacted. For the study period we took data from August, when the surge started to occur, up until mid-November. During this time all the high-risk contacts of the cases were listed and referred for testing. And who is a high-risk contact? Basically everyone who had physical contact, or non-physical contact lasting long enough — 15 minutes within one and a half metres — in the window from two days before the infection up until seven days after, the infection being the day of the test. Now, what's important to note is that on the 21st of October a change occurred in this testing regime, because the system was overwhelmed, so they said, okay, only symptomatic contacts need to go for testing. And throughout this whole period, children up to six years old, as well as pupils and teachers at primary schools, only had to get tested if they were symptomatic. So there are some things to keep in mind when looking at this data. That was the contact database we used for looking at trends in contacts over time, and we also linked the contact and case databases together to see whether we could identify transmission pairs — which of the cases was a known contact. To give you some numbers: in the database we found a bit more than 50,000 cases that were referred for contact tracing; less than half of those reported at least one contact, and only 5% of the cases listed was a known contact, giving a bit more than 2,000 transmission pairs.
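To make the linking step concrete, here is a rough pandas sketch; every file and column name is my assumption, not the actual Sciensano or regional schema.

```python
# Hypothetical reconstruction of the case/contact linkage; real field
# names and identifiers in the Belgian data will differ.
import pandas as pd

cases = pd.read_csv("cases.csv", parse_dates=["test_date"])        # one row per confirmed case
contacts = pd.read_csv("contacts.csv", parse_dates=["list_date"])  # one row per listed high-risk contact

# A transmission pair: a listed high-risk contact who later shows up
# in the case register themselves.
pairs = contacts.merge(
    cases, left_on="contact_person_id", right_on="person_id", how="inner"
)
pairs = pairs[pairs["test_date"] >= pairs["list_date"]]

n_cases = len(cases)
n_linked = pairs["contact_person_id"].nunique()
print(f"{n_cases} cases, {len(pairs)} candidate pairs, "
      f"{n_linked / n_cases:.1%} of cases were a known contact")
```

The last printed ratio is the analogue of the 5% figure quoted above.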
This brings us to the findings. As I said, we looked at different trends over time. First, across the different intervention periods — shown here in colours, where blue is the period when the schools opened and green is the period when the limit on contacts was suspended — you see that when the schools opened there was an increase in the average number of contacts listed. One thing I should say is that there was a bit more randomness in the August period, because the contact tracing system wasn't very well in place around that time, partly for capacity reasons during the summer. But a takeaway message from this slide is that during the period when the restriction on the number of contacts was suspended, the average number of contacts peaked at about three per person — this is the reported number of contacts; whether it's the real number of contacts is something we get to later. Soon after the restriction on the number of contacts was reintroduced — first, as you might remember, three close contacts — you saw a decrease, during the yellow period. And already before the schools closed, in the orange period when only one contact was allowed, you saw a reduction of around 36% from the peak in the average number of contacts listed. The next thing we looked at: while the average number of contacts changed, what did we see in the instantaneous reproduction number? We used the EpiEstim package for that, fitted to the reported number of cases — so not the time of symptom onset, but the time of reporting. A couple of things are worth seeing here. Blue, again, is when the schools opened, and you see there was an increase in the effective reproduction number — the increase had already begun a bit earlier, but it was around that time that it started to go above one. And it was during this period, when the reproduction number was relatively high and the general trend still seemed to be increasing, that the government decided it was a great idea to suspend the restriction on the number of contacts. It was only four weeks later that we finally saw more restrictive measures put in place, and this, as I said, happened in a gradual manner; it was three weeks after this restriction on the number of contacts that you saw the reduction in the reproduction number, which also went below one. Already before schools closed — again the orange period — there was a reduction of around 44 or 45%, to a level of 0.8. One thing that is good to note here — the big red line — is that this was also the period when the testing changed, which will likely lead to underestimating, at least instantaneously, the reproduction number, because you have fewer cases reported. But it's good to note that the decline in the reproduction number was steady: when the reporting changed you would expect a quick drop, and yet the decline still continued afterwards. So it seems the measures had an effect in curbing transmission. But then a question to ask is: when we saw this increase in the reproduction number, was it related to the reopening of schools?
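For readers who want the mechanics, here is a rough Python re-implementation of a sliding-window instantaneous reproduction number in the spirit of that estimator (the Cori-style ratio of incidence to weighted past incidence); the serial-interval parameters are illustrative assumptions, not the study's values.

```python
# Rough sliding-window instantaneous Rt: incidence divided by total
# infectiousness, the latter being past incidence weighted by a
# discretised serial-interval distribution.
import numpy as np
from scipy.stats import gamma


def instantaneous_rt(incidence, si_mean=4.7, si_sd=2.9, window=7):
    incidence = np.asarray(incidence, dtype=float)
    shape = (si_mean / si_sd) ** 2
    scale = si_sd ** 2 / si_mean
    # w[0] is the weight at a lag of one day.
    w = np.diff(gamma.cdf(np.arange(0, 31), a=shape, scale=scale))
    w /= w.sum()

    def infectiousness(t):
        lags = min(t, len(w))
        return float(np.dot(incidence[t - lags:t][::-1], w[:lags]))

    rt = np.full(len(incidence), np.nan)
    for t in range(window, len(incidence)):
        lam = sum(infectiousness(s) for s in range(t - window + 1, t + 1))
        if lam > 0:
            rt[t] = incidence[t - window + 1:t + 1].sum() / lam
    return rt
```

Fitting this to reporting dates rather than symptom-onset dates, as described above, shifts the curve in time but leaves the qualitative story intact.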
To answer that, we looked at whether these general trends in contacts over time differed between age groups. Again, light blue is the period when the schools opened, and we actually see the increase across the board — across all age groups, divided into ten-year age bands — and it was sustained during the suspension of the restriction on contacts. We did see overall that the 10-to-19-year-olds had the highest number of contacts, together with the 20-to-29-year-olds, but not that their contact patterns increased significantly further than the others'. Where we did see somewhat different dynamics was with the 0-to-9-year-olds — mind you, these individuals were less frequently tested and likely also less frequently listed as a contact, so there's a lot of uncertainty and less of a clear trend. For the 70-plus there was quite a lot of uncertainty at the beginning, because there were just not that many cases in this group, but overall they seemed to have fewer contacts than the rest. One thing I want to say here is that the unfortunate thing about the contact tracing data is that we don't have many contacts for whom the age is listed, so we don't necessarily know with whom the different age groups mix — which is the nice thing we do have from the social mixing surveys that were put in place in quite a few different places during this pandemic. The next thing we looked at, to infer a bit more about the role of teenagers, was whether, when schools opened — that's the blue line here — we actually saw a significant change in the fraction of cases in this age group. We fitted a segmented Poisson regression here. Before I get to the result of that regression: what we noted was that there was indeed an increase in the fraction of 10-to-19-year-olds, but that increase actually already occurred before the schools opened, and it also correlated very strongly with testing in this age group — I have another figure showing the testing rates per age group — so the increasing trend you see here is largely explained by an increasing level of testing, primarily within this particular age group. In terms of the regression analysis: we fitted a trend to the number of reported cases among the 10-to-19-year-olds, using an offset term, which was the overall number of cases reported. The important parts are the ones with the red circles: we included a dummy variable, which was zero before school opening and one thereafter, and we fitted an interaction term between time and school opening. Together these variables allow for a step increase after school opening as well as a change in trend after school opening — and that change in trend wasn't significant; the risk ratio of school opening was 1.2, with quite a lot of uncertainty, as you can see here. We had hypothesized that if school opening had such an effect on the cases occurring within this age group, you would expect first a significant increase in this age group, which would then spread on to the other age groups — and that significant increase we didn't find evidence for, with these methods at least.
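As a concrete illustration of that model, here is a sketch using statsmodels. The file and column names are my assumptions, but the structure — Poisson family, the log of overall cases as offset, a school-opening dummy plus a time interaction — follows the description above.

```python
# Sketch of the segmented Poisson regression: daily cases among
# 10-19-year-olds, with the overall daily case count as an offset,
# a 0/1 school-opening dummy, and a time-by-school interaction.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("daily_cases.csv")  # assumed columns: cases_10_19, cases_all, school_open
df["t"] = np.arange(len(df))

model = smf.glm(
    "cases_10_19 ~ t + school_open + t:school_open",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["cases_all"]),
).fit()

# exp() of the school_open coefficient is the step risk ratio at
# reopening; exp() of t:school_open is the change in trend afterwards.
print(np.exp(model.params))
print(model.summary())
```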
Then the third kind of dynamics we looked at was whether the 10-to-19-year-olds, after school opening, were also quite frequently the source of infections. The important thing to note here, again, is a caveat with regard to the data: during August, as I mentioned before, test and tracing was a bit less well in place, so whether those numbers should be taken at face value is a good question to ask. But the other two panels — B and C — cover the time when schools were open, and there you see predominantly intra-generational transmission, though actually also quite some transmission between generations. If we then look at what fraction of the infections is caused by the 10-to-19-year-olds infecting the other age classes, it's about nine percent — and it's more frequently the other way around: the other age classes infecting the 10-to-19-year-olds. And during the period when the holidays got extended — the autumn holidays — you saw a reduction in transmission between the generations and more within the generations, which is the diagonal line here. The thing to note is that we also had the change in testing strategy, which might mean the 10-to-19-year-olds were, for example, less frequently tested, because they may present less frequently with symptoms. But overall it seems that the number of times the teenagers cause infections is lower than vice versa.
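The tabulation behind such panels could look like the sketch below; the inline rows are fabricated purely to make it runnable, and the age-band column names are assumptions continuing the earlier linking sketch.

```python
# Cross-tabulating transmission pairs into an age-by-age matrix and
# pulling out the "from teens" versus "to teens" shares.
import pandas as pd

pairs = pd.DataFrame({  # stand-in for the linked pairs table
    "age_band_infector": ["10-19", "20-29", "40-49", "40-49", "20-29"],
    "age_band_infectee": ["10-19", "20-29", "10-19", "40-49", "10-19"],
})

matrix = pd.crosstab(pairs["age_band_infector"], pairs["age_band_infectee"])
total = matrix.values.sum()

# Pairs where a teen infects another band, versus the reverse direction.
from_teens = matrix.loc["10-19"].drop("10-19", errors="ignore").sum() / total
to_teens = matrix["10-19"].drop("10-19", errors="ignore").sum() / total
print(matrix)
print(f"from teens: {from_teens:.1%}, to teens: {to_teens:.1%}")
```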
So what did we conclude from these descriptives? First of all, that operational data comes with a lot of caveats and limitations, so it requires very careful description. We tried to visualize it in such a way that we could provide intuitive insights and cross-validate between different trends, and hence say something that helps inform public health. What we concluded is that the second wave in Brussels — and the increase in the level of transmission following the reopening of schools — was largely a consequence of mobility across all age groups, not specifically of the 10-to-19-year-olds. We did find evidence for transmission among the 10-to-19-year-olds — and there has also been modelling suggesting that school closure can affect those dynamics, though often it's not enough on its own — but they do not seem to have been the primary source of infections; it was the dynamics between all the age groups, and increased mobility in all age groups, that played a role. I think even more important to note is that it was during a time of school opening that more restrictive measures were put in place which affected the dynamics between age groups and helped reduce the level of transmission and bring the reproduction number below one — so having such restrictive measures during school opening was sufficient. Again, there are quite some caveats with the data. I think it's great that we've tried to make use of operational data, but a couple of things are important to note. We didn't have much detail on the characteristics of the contacts, which kept us, for example, from making those nice social contact matrices showing who is mainly in contact with whom. We also didn't necessarily have information on where people had their contacts — we're working on that now, because there is now a bit more data on the whereabouts, and it's being linked together — information we could use to look a bit more dynamically: if we see transmission events between cases, do we see further onward transmission, what are the full transmission chains, and how are they linked to different places? The way the contact tracing system of Belgium works is also a bit limited, because it only asks about contacts from two days before. The average number of contacts listed was also quite limited, and lower than what was observed in social mixing data — unfortunately, social mixing data was only available up until August, so not for the other periods we analysed — but we have about two, and in those datasets it's around 3.5. In the contact tracing it is the high-risk contacts only, which could explain part of it, but it could also be that cases just don't list all their contacts. An important thing to note, though, is that overall the age-specific differences we find are similar to the social mixing data. Also, there was just a low number of cases that were a known contact, so there are a lot of unidentified transmission events happening which are not captured by the contact tracing system; we dealt with that by looking at reporting trends over time — not individual counts, but trends over time and age-specific trends over time. And, as I've mentioned a few times, there was a shift in the testing strategy towards excluding asymptomatic contacts, and this may have affected the estimate of the reproduction number — but as I said, there was a steady decline afterwards too, so we can infer that the measures were indeed effective. So, in a MOOD context, what are some implications? We showed an example of how to use operational data, but it comes with challenges in access as well as usage. This system was definitely not set up for tailored outbreak analytics — it was really built for people knowing who they should be calling. So there are a lot of different datasets — cases are listed in one, the contacts in another, the whereabouts in a third — and things changed the whole time, so it's not easy to link all these datasets and then have easy access to transmission chains, for example. It's a phone-based system, so very anonymous; whether individuals list all their contacts in such a system is something that could be validated with other data sources — mobility data, social mixing data, these kinds of things. There's a need for improvement, and maybe, for future outbreaks, for having in mind a minimal set of characteristics we need from contact tracing to quickly see where infections are primarily occurring, with whom, and who is primarily in touch with them, etc. And a final thing: we were lucky to have access to these data, but this has changed. There have been discussions about what is operational research and what is scientific research — the definitions — and about when people, and which institutes, are allowed to have the data. We've had experts discussing this too: there's just so much delay in getting access for the people who can use it for advanced analyses that there's not always enough room to get insights quickly.
Some acknowledgments, which I also gave at the start — and I think it's now time for questions. This transmission matrix is very interesting; can you open it one more time, can you share your screen? I get the feeling that it's almost like the older people — people in their 40s and 50s — are the ones transferring infections to younger people. Is that correct? From what I see in panels A and B, it looks like the 40-to-49-year-olds transmit to the 10-to-19-year-olds. Yeah — so there are these whereabouts files, right? I think it was around November, or a bit earlier, that these data became available: individuals are asked, where do you think you got infected? And people have been using these data — not linking datasets, just looking at what fraction of places is reported where people think they had their infection. There are basically three main places — the workplace, schools and companies are the ones frequently listed — though things like being on a bus are harder to recall and refer back to. Anyway, for a long time the workplace was at the top; lately it seems to have been more the schools, but again, it's difficult to define the exact frequencies. So that would be in line with this, right — people infecting each other at the workplace, because the 40-to-49-year-olds are the workforce. I had the impression — I think I read some study from France — that the student population, people from 20 to 29, were kind of the main vector of spread, and I don't see that in these plots; that's why I ask. I almost see the other way around: the people in their 40s and 50s transmitting to younger people. Well, you do see a lot among the students, the 20-to-29-year-olds — quite some infections among them — and there have indeed been suggestions that in Belgium it's apparently very common to live in your student city during the weekdays and go back to your parents at the weekends. But it seems, at least based on these plots, that it's primarily within the student group, with some further transmission outside. I think that's one of the limitations of this study: as I said before, we don't have the dynamic transmission chains giving insight into what critical number, or fraction, of infections is needed to transmit to other generations and then cause further onward transmission in other age groups, if you see what I mean. That kind of modelling of the transmission chain is not what you find here. And this type of analysis was only done for Brussels — is the idea to do it for other cities in Belgium? It certainly is; it just turns out to be very difficult to get access to these types of data. I have a colleague who's trying to get hold of these data for the Flanders region, but there is just continuous delay.
You first have to show the confidentiality body what you're using the data for — whether it's for operational insight or more for scientific purposes; if it's for scientific purposes, it can wait, and what is the definition there anyway? So the intention is there, but it has just been delayed. On the contact tracing data: we were actually approached by the Brussels region, by the public health authority of Brussels. That was during the first wave; they were very much in need of support in trying to prioritize their contact tracing — could we help identify where cases, where transmission, primarily occur? That's how we started to look into the data. The first question they wanted us to help with was more timely cluster detection; the second was which measures have worked and which haven't, and when and how to lift measures. The way the system works in Belgium is federal: at the federal level, Sciensano curates the case report data — all the individuals tested for SARS-CoV-2 that are formally reported — and then it's Sciensano who informs the three regions on who should be contacted for contact tracing. We kept the social contact hypothesis in mind — the more contacts you have, the more likely you are to cause onward transmission — so we looked at trends over time, and then age-specific trends, in the average number of contacts listed. What we did not have is information on who people have contact with; that was the unfortunate limitation in this contact tracing data — we had very limited information on the characteristics of the contacts. What we could do, and what we showed in the presentation, is look at transmission pairs and which age classes are most frequently the infector, the source of infection. On the former trend — the average number of reported contacts — we found that the 10-to-19-year-olds on average had the highest number of contacts, across the board and across time, followed by the 20-to-29-year-olds, while the 70-plus had the lowest. And when we looked at the bubble graphs — the transmission events between age classes — we found that the 10-to-19-year-olds were not the ones who most frequently caused infections. That could mean they are potentially less infectious, so less frequently causing infections, or it could relate to one of the limitations of the study: that the source case is not attributed to the right person, because we had a lot of unnoticed transmission events — only five percent of the cases listed was a known contact. Interesting for further research, I think, is evaluating different types of contact tracing systems — the very anonymous telephone system we have in Belgium versus a more local approach, where people are called in their own language, or visited, as we've been piloting in Antwerp — and seeing how these compare; comparing them with datasets like social mixing surveys or Google mobility data could potentially help evaluate the contact tracing system, which is of public health relevance.
Another limitation for future research, which we unfortunately couldn't address, is that because the system was a bit limited — with quite a lot of unnoticed transmission, as I've been mentioning — you don't necessarily get good insight into the transmission chains. We have some infector–infectee pairs, and some clusters a bit bigger than just two, but you know there are a lot of clusters which we couldn't detect based on these data. Actually, Brussels has since hired a data scientist who has linked the different data sources in a systematic, automated way — cases, contacts, also the whereabouts (where cases say they think they had their contact), as well as passenger locator forms. All these things are now being linked and used in daily operations to see where the currently active clusters are. So since the period of our analysis there is much better insight into that, and for now it's being used for operations — where do we need to go, where do we need to send our field agents, etc. — but it's actually also a very useful dataset for seeing, historically, where we found the biggest clusters.
Esther van Kleef is a senior epidemiologist at the Institute of Tropical Medicine, Antwerp and holds a PhD in infectious disease epidemiology from the London School of Hygiene & Tropical Medicine. During her work experiences at national public health institutes (PHE, RIVM) and Oxford University, she has developed a keen interest in understanding the transmission dynamics of pathogens and the effectiveness of interventions, having largely focused on (modelling) antimicrobial resistance. Within the MOOD Project, Esther is working, together with MOOD partners, on identifying how to improve the integration of the threat of disease X in existing procedures of epidemic intelligence. She has supported analyses on ECDC mortality data related to different countries' Covid-19 measures in the context of MOOD's focus on early detection, preparedness, and monitoring of infectious diseases. She was recently selected among other scientists to join the Technical Advisory Group (TAG) created by WHO and UN DESA. The objective of the TAG is to obtain more accurate data on Covid-19 mortality cases, to help review WHO's current methods, to come up with more reliable analytical methods, and to standardise the existing methods for surveillance data. Esther illustrates the effects of physical distancing and school reopening on case reporting and age-specific SARS-CoV-2 transmission patterns, as discussed in the recently published paper she co-authored (https://www.eurosurveillance.org/content/10.2807/1560-7917.ES.2021.26.7.2100065). For this study, the group of researchers employed operational data from the COVID-19 contact tracing system of the Brussels region and case reports made available via Sciensano, the Belgian institute for health.
10.5446/51814 (DOI)
Now your voices are all going to get so much louder. No, PA stands for Pathetic Audio. That's dark. That's recorded. Can you hear us all okay? Yeah. All right, good. And no feedback, no crosstalk. That's all pretty good. It's almost like we know what we're doing. Not our first rodeo. One minute to spare. Who are we? I don't know. Hello. Do you know what we're doing? Yeah? We just like to put a lot of stuff out on the table. It's kind of fun. So we're going to record a .NET Rocks episode. How many of you have ever heard a .NET Rocks episode that was recorded in front of a live audience? Oh, wow — lots of hands. You know what your role is, right? Loud. What is it? Make noise. That's right. So what I'm going to do is turn this down, because I'm going to kind of scream, actually. No, I won't do that. I'll just go like this. I'm going to say, "Hey Oslo, it's .NET Rocks!" as best I can. And I want you to scream and stand up and beat the person next to you. Take off your clothes and set fire to the building. Are you with me? Yes. That's what I want to hear. Please keep your clothes on — nobody wants to see that. Sir, keep your clothes on. All right. Everybody's like, cool. What? All right. Here we go. We have pushed the red button. Okay. Here we go. Hey Oslo, it's .NET Rocks! Boy, Carl, you sound great. Well, my voice will show up eventually. Maybe. Thank you for coming to the security panel at NDC Oslo 2016. Are you having a good show so far? Yes. Awesome. I appreciate the fact that you guys are so outgoing — most Norwegians are very rowdy, as you know. So, Richard, buddy, how are you doing? I'm good. We're just about at the end of this sprint of shows for us. It's been fun. We've done 12 shows here. This is the 11th one. Yep. One more to go. One more to go. So we're powering our way through. Yeah, no question. I'm very excited to be doing this security panel. Before we get talking to the panel and the guests, we have a little business to do. The first one is called Better Know a Framework. Roll that music. All right. What have we learned? You're actually seeing how the sausage is made. We don't actually hear the music when we're recording. You ever hear us say, wow, that music is really cool? That's a lie. We add the music in later. So we just sort of sit here for a moment, and then I'll segue. We call this an edit point. All our shows are edited — we have these amazing editors; they even make us sound smart. So this will all be fixed. When you go back and listen to the show, just remember this conversation, because it's never going to appear anywhere. All right, you ready? I'm almost ready, because what I have to do — and I'm using my phone where I should be using a PC — is I've got a URL that I am now copying and pasting with my finger. I have my finger on my iPhone, which is, as you know, the most fun part of computing. So I can get to this link that I've copied before. And you wait for it, wait for it. This is good. Troy's going to love this. In fact, he probably wrote the story, because it's a security thing — so it's probably his fault in the first place. Probably his fault. So this is show 1326. Okay, go ahead. All right, buddy, what do you got? So if you go to — you know the pattern — 1326.pwop.me, it will bring you to this story: "U.S. spies are building software to spot your suspicious behavior in live video." Nice. Yeah, I know. And every one of you guys is thinking, oh, you poor Americans.
The intelligence community is working on amping up people-recognition power to spot, in live videos, shooters and potential terrorists before they have a chance to attack. Part of the problem with current video surveillance techniques is the difficulty of recognizing objects and people simultaneously in real time. But Deep Intermodal Video Analytics, or DIVA — nice — yeah, I know — a research project out of the, and I'm going to put this in quotes, "Office of the Director of National Intelligence", will attempt to automatically detect suspicious activities with the help of live video pouring in through multiple camera feeds. I am no longer sunbathing in the nude. We're all better off, Carl, I think. And, you know, this just walks that fine line between security and privacy. I don't think it walks it at all — I think it's way over the line. Way over the line. There's the line; there they go, sprinting over the line. So, you know, I always like to find a story, when we talk to Troy or other security people, that just elicits emotion around privacy and security. And that's a good one. Yeah, you're looking for the "they're doing what?" reaction. What? Okay. Well, anyway, that's what I got. Awesome, dude. Who's talking to us, Richard? I grabbed a comment off of show 1295, the one we did with one Troy Hunt. Yeah. We talked about SQL injection and ransomware and all other kinds of good things — Troy doing his usual job of scaring the snot out of us. Right. And David Glass had this comment, where he said: "Oh God, no, not Troy Hunt again. Every time he's on .NET Rocks, he makes me panic and change all my passwords. And my pants. Seriously though, I am stunned every time I see a seasoned developer show a complete lack of interest in making their application secure. I can understand how new devs don't get it, but it still winds me up. They seem to fit into one of these three camps. One: oh, I just never thought about it. It's baffling that this still happens today. Two: it will never happen to me. I just show them the web logs of any app that they've worked on, and the constant stream of port scans and login attempts usually bucks them up. Or three: I just don't care. The worst. Apathy is the hardest thing to fix. Why isn't security not just taught at level one, but made an integrated part of the process? It's like giving a class on web development and not mentioning CSS" — which in some ways is a kindness, but okay. I agree. "Every student developer should be given more trunk talk." Have you ever wondered how I can read every one of these emails perfectly? Let me tell you the truth: it's my brother Jay who edits the show. He's going to fix it for me. Or is it Jay? We love you, Jay. Is it Jay? No, it's a Thursday show. It's Brandon. Brandon, fix it. No, it's a Tuesday show. Is it a Tuesday show? Yeah, this is a Lawrence show. We have three editors. Sometimes we need all three — to make us sound smart. They have a console, they have a dumb fader, and they turn it down. It's like the brightness knob, only it works. All right, I've got to fix it. You can't laugh — we're going to edit this in, so it's going to be a quiet moment. "Every student developer should be given some Troy Hunt talks to listen to." I have nothing to add to this. I think that's absolutely true. I agree, and it's definitely a problem. David, thank you so much for your comment. A .NET Rocks mug is on its way to you. If you'd like a .NET Rocks mug, write a comment on the website at dotnetrocks.com or via any of our social media.
We publish every show to Google Plus and Facebook, and if you comment there and we read it on the show, we'll send you a mug. And definitely follow us on Twitter: he's @richcampbell, I'm @carlfranklin. Send us a tweet. We print them out and post them on the walls of the Department of National Intelligence. All right. Well, I'm going to let our guests Troy, Stephen and Niall introduce themselves, starting with you, Mr. Hunt. Yeah, I'm Troy Hunt, the guy in the thing just before. You were that guy. Apparently the scary one — the Australian security guy out of our colonial collection here. It is kind of a colonial mob up here. Sort of, yeah. Except for you — Canadian. Mr. Hunt is actually Canadian too. Oh, okay. Stephen? Hi, I'm Stephen Haunts, and I'm not Canadian. Oh. I'm from the UK. I'm a lead developer for a company called Bang Butler, and I also do some work for Pluralsight with Troy. Great. Cool. Not as scary as Troy, though. My name is Niall Merrigan. I work with Capgemini. I'm one of our local Irish-Norwegian imports here — I kind of came up here, got lost, and they won't let me go back home. Their actual bios are on the website, dotnetrocks.com, if you really want to find out how they're qualified to be here in the first place. So, a security panel. Where do we start? What's going on in the United States? I'm the only guy here from the United States, and I'm asking you guys: what's wrong with my country, actually? Do we have to answer that politically correctly? Of course not. Shall we start with the election and just go from there? I'm sure Trump's got your best interests at heart. You should try living next door to the guy. I know — I'm always lobbing scud missiles over the border. Build a wall. I mean, the breach culture — I think people are getting numb. They're just not even reacting to it anymore. It's become funny. I think it's partly that. I mean, we're recording this at a time where in the last few weeks we've had things like MySpace: 360 million records. Yeah, a new record for you. Yeah, I passed a billion records on the site, believe it or not. And just a time-out for those who don't know what he's talking about: Troy has a database of email addresses where you can look yourself up and see if you've been hacked. It's called Have I Been Pwned — pwned, with a w. It's got all your email addresses in it. Probably does. I actually found mine in there from the Adobe hack, and I had to go change my password because of it. So over a billion, you said? Yeah, we passed a billion yesterday, because I loaded VK, the Russian version of Facebook — about another 93 million.
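For anyone who wants to look themselves up the way Carl did, here is a sketch against the current Have I Been Pwned v3 API. Note the API-key requirement postdates this 2016 episode, so treat the details as subject to change.

```python
# Query HIBP v3 for the breaches an email address appears in.
# A 404 from the API means "not found in any breach".
import json
import urllib.error
import urllib.request
from urllib.parse import quote


def breaches_for(email: str, api_key: str):
    req = urllib.request.Request(
        f"https://haveibeenpwned.com/api/v3/breachedaccount/{quote(email)}",
        headers={"hibp-api-key": api_key, "user-agent": "hibp-demo-script"},
    )
    try:
        with urllib.request.urlopen(req) as resp:
            # Default (truncated) response is a list of {"Name": ...}.
            return [b["Name"] for b in json.load(resp)]
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return []
        raise


print(breaches_for("you@example.com", "YOUR-API-KEY"))
```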
And that came after — what was it recently? MySpace, LinkedIn, Tumblr was there, fling.com. Everyone's going, what? Fling.com — look it up after you get home, don't do it here. We had all these massive data breaches, and the interesting thing is they're all from several years ago, like 2012, 2013, but they're just surfacing now. And what we're seeing is this media buzz where everyone wants to believe that everything is a data breach. So the news yesterday was: it's Twitter, Twitter's got 32 million accounts hacked — that's all the headlines. And then the chief security officer at Twitter came out and said, no, it's not ours, it didn't come from here. So now it's not just getting used to breaches; it's automatically assuming the worst, and everyone losing their minds without even checking whether things are actually legit. You often fall into the situation where you're the validation source now. Yeah, well, I actually check stuff, so there's that. But a lot of people don't. And I guess there are also parties out there that are sort of incentivized by there being breaches: there are people that sell the data, there are people that sell security services that benefit from other people thinking that data is out there. So there are too many vested interests in wanting there to be large data breaches, and it's not in their interest to check them. Interesting. Wow. Now it's just evil all by itself. Oh, it's another level. Well, this is an odd aspect of dealing with security — you sort of got into the situation, with this bloody website, of being on top of every breach. Have you guys been responsible for systems that have been breached? Like, you've been on the other side of this? Will you admit it? Luckily not, but I've worked for plenty of companies that have made some pretty dumb decisions, and they've had some pretty poor excuses for why they don't improve their security. But they haven't been punished for it with a nice public "here are all of these usernames, emails, credit card numbers" and so forth. And we'd blame Stephen Haunts. Not yet. I mean, there's one company I've worked for, whose name I'm not going to mention, where I wouldn't be surprised if something did happen in the future — it's sort of inevitable. Yeah. And what's interesting about the time-delayed breaches is just this idea that they may have been hacked, but right now somebody's sitting on that data, maybe trying to sell it without making it public, because it's worth more while it's unknown. It's not like somebody leaves a card behind saying, hey, I copied all your data. And is there any evidence that stolen data — whether it's accounts or banking statements — is used more for collateral, for prestige? Or do people actually buy it and then hack against those accounts and commit other crimes with the data? Do you see both things happening? I certainly see both. One aspect of this I find really interesting is that there are a lot of people that trade in data breaches the way you would trade in baseball cards, right? And a lot of the time it is actually kids — legally children; maybe they're 15, 16, 17 years old, but they're kids. And they're going, hey, I've got this one, do you have that one? Can we do a swap, can we do a trade? And I'm looking at it going, well, why? What are you doing with this stuff? They want to do things like look up friends; some of them want to see how many passwords they can crack. But it's bragging rights too. I mean, you're a kid and you say to your friend on the bus, hey, I've got all of LinkedIn's database — passwords and all that stuff. Ooh. Yeah, there's a degree of — you have a certain medal of honor among your script kiddie friends. "Script kiddie" — hang on a second, you're colloquially speaking. Yeah. So there's that.
But the other side of it as well is that there is a commercial upside to having breached data with accounts that actually work in other places. So we've seen just in the wake of this news the last couple of days about there being some large amount of data that works with some number of Twitter accounts. A bunch of people have said, hey, my account, my Twitter account has been broken into, and there are people posting things like porn networks. And inevitably there is some degree of monetization there where it drives traffic and awareness of the sites. And there's definitely a really sort of shady underbelly to that, which does actually have a commercial incentive as well. Like with all the different breaches, if you've got access to all these kinds of passwords and usernames, and you can start seeing all the hashes, you can start to draw conclusions about the type of security they're using, and then start kind of maybe social engineering, especially if you can find one of the high profile accounts where they're not using 2FA. If you can get near a kind of CIO, CSO, or even like a financial officer, you can use that then for advanced social engineering techniques. Social engineering meaning blackmail and such. That's exactly it. Everyone thinks, oh, you know, social engineering, someone's going to click a link, phishing from some Nigerian prince. But there's other things. For example, they'll say, well, we'll try and insert something into your computer and track out like some of your personal information, maybe try and activate your webcam at an inappropriate moment, or put some data on your computer that we can use then to blackmail you to get you to give us money. Because humans are the weakest part of every system. It's usually where we get the biggest and easiest breaches — out of people more than out of any system. We just try and find a user with a weak password, or we try and strongarm our way in somewhere and then take over. Because there's a lead time of like 200 days between a system being breached and it being found by the kind of security team, if they don't have a proper kind of intrusion detection system in place. Well, I was just checking my facts here. A few days ago, Zuckerberg had his Twitter, Pinterest and LinkedIn accounts all hacked. Allegedly, he had a password of "dadada". Allegedly. But we do know that he did have those three accounts hacked, because we saw other people take over them and tweet and message on his behalf. So talking about high profile individuals that are the targets of these sorts of things. Because wasn't that the thing with the LinkedIn one? Once the LinkedIn breach kind of went public, the latest one like from four years ago, they started looking for high profile accounts and then started kind of posting. It wasn't that they were posting against LinkedIn passwords. What they were doing is they found the password the person used for LinkedIn was the same one they were using for Twitter. And then they had enabled cross posting from Twitter to LinkedIn. So people were then kind of saying, oh, you know, there's nice LinkedIn, you know, your professional network. And all of a sudden up comes this porn URL from some CIO in like some company. The guy's going, how did it get there? Did they hack LinkedIn? It's like, no, they hacked your Twitter, because LinkedIn had forced everyone to reset all their passwords if you were involved in that breach.
So public service announcement: don't allow cross posting from Twitter to LinkedIn, especially if you use the same password. Maybe just stop using the same password. Oh, here's the other thing, like all of these accounts, they haven't got multi-step verification turned on, right? Like as soon as you have a reused password or a bad password, that is your fallback position. That's your defense. You know, you're going to have the little SMS or the authenticator app. So in the case of Zuckerberg and those other ones, obviously they just didn't enable that. And it's there in all of these big social media accounts now. So every time you see one of those owned, you sort of go, you missed something really fundamental. Well, like how many people don't use 2FA on everything? 99%. There was a figure the other day. Is that fact checked? Yeah, no, that is fact checked. I can't remember whether it was from LinkedIn or it was one of the other big ones just recently. It wasn't MySpace, because no one cares about that. It was one of the big ones. And they said literally their statistics showed less than 1% of people actually enable multi-step verification. Well, and part of that would be that there's an awful lot of bad 2FA implementations out there. Like it cripples using the product. Yeah. So even if you don't turn on 2FA, I mean, I arrive in Norway, I go to log into Twitter and immediately get an email from Twitter going, hey, are you in Norway? Which is not bad, right? I mean, at least that's a useful thing. But there was an interesting situation that happened the other day, Richard. You went to PayPal, you logged in, and it knows that you're from Canada, and yet the PayPal page after he logged in was in Norwegian. Including the button to switch it back to English. That's 2FA right there. It just stops you using PayPal. Also, what about things like password managers? Well, I was going to say, should we even be using passwords at all now that we have password managers and things? We shouldn't be remembering them now. I mean, you don't need to remember most of them now. But look, we've still got to use them. It's just a question of not getting an emotional attachment to being able to remember it. Well, that's where it breaks down. A password manager like the one Richard uses, you don't actually know what your password is. Yeah. So we were laughing about this, right? So I was logging into PayPal. I have no idea what my PayPal password is. It changes itself every 30 days, right? Like, LastPass does that for me. I don't even — beats me. I don't know. It would be a pity if somebody found your LastPass password. That would be a problem. And that's sort of the issue when you talk about these kinds of tools. Now, admittedly, my LastPass password is all about the entropy. I'm pretty sure it's in the 80-character range. Literally, all about the entropy. Well done. I think it's a good idea to study some obscure poetry, like some Icelandic Eddas or something. And then just take a poem, memorize it, spend the time to memorize it, and you've got like five lines. Now, you've got five passwords that you can remember just by number, one, two, three, four, and five. So you can create yourself a little document somewhere that says, oh, this site is one, this site is two. You're still getting into the problem of... It was a little surreal. My mom, she rings me up and she goes, Niall, I'm going to get a password book for the house. At that point, I hung up. A password book for the house?
She was, because it was going to be the shared passwords for my mom, my dad, and my brother. And people who came to visit. And I was not too sure if my mom was trolling me or is the best social engineer in the house. I'm sorry, you know, take off your shoes, write your password, it's fine. Would you like a cup of tea? What's your password? But she sends me this and I'm like, I have to hang up. And she goes, why? I said, because right now I just need to scream out a window for a little bit. And then I'll talk to you again. And I said, it's okay, I just got you a thing — it was 1Password I picked up for her. And I said, here's a subscription to that. Just go nuts, use that. She goes, all right. And at that point, introducing her to that, she's now got so used to it that she's now kind of going, I don't know what my password is, nor do I care. But it's that education part that we're missing — why should I need to know any passwords at all anymore? Well, and the big thing here is like, okay, great, you've memorized a set of passphrases. And then you log into Microsoft Live ID, which is between eight and 16 characters. So it's like an inherently crappy password, no matter what you do. But a good password manager will at least let you set that to something that only is going to affect that. In fairness as well — okay, 16, it's crappy and we should do another panel with Barry and beat him up about it. I love it. However, 16 random characters — like genuinely random characters — aren't getting cracked. The amount of entropy you can get out of 16 genuinely random characters is not too bad, right? But you know, longer is better. Yeah, the XKCD password cartoon is correct. But it's degrees, right? So like how long should it be? I mean, maybe, maybe that's not, because you like password managers. So what's the right length? As long as... No, no, no, exactly. Oh, 42. Alright, 43. You know what I mean? Like it's the math, you can't lie with the mathematics, but once you get genuine randomness, the length doesn't have to be too much in order for the strength of it to be pretty off the charts. What I don't understand is sites where you're supposed to pick a password and then they have rules, right? And the rules are, can't use special characters. Can't use numbers. Can't use the word select. Only uppercase and lowercase letters. In other words, they're restricting the strength of your password for what purpose? I just don't understand. Business rules. Why? Because, you know, bytes cost money. Yeah. Because they're passing it as a query string and they don't want tokenization to happen. And of course, that's the funny thing for those who haven't maybe thought it through — the bytes-cost-money argument. Once you hash it, it's all a constant length anyway. So it doesn't matter how long the input is. The length sort of goes away. Yeah, and the cost of getting breached is a little bit more than the cost of the bytes in the first place. And you can kind of bring that up now, because there's a new European regulation involved in that called GDPR, on data protection. And if you get breached, it's a minimum of 20,000 euro all the way up to 4% of your gross income. Yeah, I'll buy a few special characters. Yeah. But like, you know, we were talking about 2FA and the whole way it breaks systems for people to use. The whole UX, security UX concept, I think is a bit broken at times.
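To put rough numbers on that entropy point, as a back-of-the-envelope: 16 characters drawn genuinely at random from the roughly 95 printable ASCII characters give about 16 × log2(95) ≈ 16 × 6.6 ≈ 105 bits of entropy. Even at the 350 billion guesses per second quoted later in this panel, working through 2^105 ≈ 4 × 10^31 possibilities would take on the order of 10^12 years — which is why genuine randomness matters more than clever memorability.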
You know, how we get this username and password box, it doesn't tell you what we expect you to put in before you start. So you start off putting in a big password, and then all of a sudden it says, no, you have to have something different. We need to have this, we need to have that. It becomes a nightmare. I think that the UX guys now need to kind of come up to the stage as well with the security part of it and say, well, how do you put this together? How do you make it simpler for users to understand, and for the users of our systems — to say, pick something good, let's guide you through it. Like we see, you know, the little bar that goes from orange or red all the way to green on your password strength. They're kind of a common thing now. Mm-hmm. So I got a question for Richard actually. LastPass is the one you use? Yeah, I like LastPass. Yeah. So LastPass, you have one master password and then this thing controls and gets into all of your accounts. Yep. Is there any time that you wish you hadn't used it? Like have you ever been on your phone and not been able to log into something because LastPass didn't work on your phone or something? So the one thing you look for in a password manager is, is it on all of your devices? Right. And LastPass is pretty good about being on all of those devices, but I'm still using a Win10 phone because deep down I hate myself. And the LastPass client is quite crappy, right? Like, because nobody's working on it, because who's got that phone, right? And so... You want to buy an iPhone? Nice. There is a clumsiness now when I need to log into something, and I've absolutely been in this situation where I have to go to the LastPass app — and there is a mechanism for copy password, although you never see the password — and then you have to flip back and paste it into the thing you want to use it in. So it is clumsy, where on Android it's seamless, right? It is as it is on any PC where I'm in Chrome — I don't use Edge because no plugins, right? So in Chrome, when I'm properly logged into LastPass, as soon as a username and password box appears, it's just filled in. Yeah. And I never see the password, it just happens. Like this, the friction goes away. The other thing there, though — and I'm not going to say LastPass is the perfect tool, like it's the one that makes me happy; there's a bunch of them, and there's free ones as well — the care and feeding for them is cleaning up your own mess. So, you know, LastPass every so often reminds me, hey, you still have a couple of old passwords that are the same on some old accounts. Can we go fix those? And does it actually go to Amazon, PayPal, whatever, and change your passwords for you? Some sites have set up a service now so that things like LastPass will literally change your passwords for you. So you don't have to do anything. God, I hope that doesn't get hacked. Yeah. You're totally right. And it's like, but again, you get back to why do we use the cloud? Because in theory, the public cloud providers have the best people keeping that infrastructure running. These password services have really extraordinarily talented security people working there. There's only so many of those to go around. I trust them more than I trust myself to remember my Eddic poem number. The same way we feel about running our own servers.
You know, we think about the battles we've had keeping the .NET Rocks site running in the old days when we were literally running on our own hardware. And now that it's in the cloud, while it's still not perfect, it's better. Much better. So, I don't think the password managers are perfect, but without a doubt, better. Okay. There you go. LastPass. Good solution. And only one of them. Pick a product. The one thing true in all of these things is you must spend a little time learning this and getting used to it. And how often do you change your LastPass password? Relatively rarely. Because it's long and it's sufficiently entropic and there's just not a big need to change that password all the time. Cool. In the end, we're talking about data breaches, right? The issue here is the most common passwords are out in the wild now, right? The big thing that comes from a data breach is those passwords are now exposed to the world, and so you really don't want to use them again. Yeah. And of course everybody does. I have — I'm not going to tell you how many, but I have several passwords that I rotate, and every once in a while I take one out of rotation and I add a new one, because I can only remember so much, right? And that's what I think most people do. I don't even think most people do that. I think most people have one password that's probably 10 characters that they use everywhere. Except for places that aren't allowed to use letters. Right. And then the ones that don't like numbers, and yeah, it's the inconsistency of all the sites that drives me nuts. You know, the number of times you go there and it says, no, you can't use this. Or even the length, right? Like how come the minimum length is always different? And the funny thing is, I do a lot of workshops and I say to companies, you know, what's the right minimum length? And there's a really funny pattern. Everyone always says 6, 8, 10. It's always an even number. Has anyone got a minimum length that's an odd number? Seven. However, brute force retries — always an odd number. Three, five. Right. I don't know why. Just an observation. You're only allowed to retry three times and then we'll lock you out. Why not four? I have a certain number of accounts in my life that are perpetually locked out. Right? I mess up my American Express account almost every time, one way or another, and I'm locked out of it all the time. And I just don't care enough about it. I've been locked out of my telco for years. And I have a tough time caring. I just can't pick up my voicemail. I do not care. And the one thing I always find irritating about password policies, especially corporate policies, are the ones that make you change your password all the time when you're in a company. Forced rotation. Yeah. Just add one. That's exactly it. You know, you can tell how long someone's been in a company. Does it give you any extra security? No, because you just add a number to the end. I had a credit card that required an additional password when you went to use it online, and it would kick into their own little iframe thing that you had to enter it in. And there was no password recovery. So when you couldn't remember the password — because they had their own goofball rules that didn't fit with any of your passwords, and this was before I was using the password manager — you had to change the password. So you go to change the password and it tells me I already used that password. So you can't recover the password, but you can tell me not to use it again.
That's awesome. Hey, Richard. Yeah, buddy. Guess what time it is now. It must be that happy time again. Yeah, it's time to change my password, too. It's all about the entropy. You like that, huh? That's a good one. It's actually time to give away a Syncfusion Essential Studio to one lucky member of the .NET Rocks fan club. With over 650 controls, Syncfusion's Essential Studio is the most comprehensive suite of components available for .NET and JavaScript and Xamarin. Yeah. With world-class diagrams, maps, and charts. Reduce your development time, save some money, and get the best support in the industry. These are just a few of the reasons over 800,000 people make Syncfusion a part of their daily dev process. And now individual developers — excuse me. Sorry to step on you there. I feel like Lou Costello here. Yeah. And now individual developers and small teams. Oh, shit. Don't get sick, people. It really sucks. And now individual developers and small teams can get access to every single control in Syncfusion's library for free. For free. The community license also gives you access to Syncfusion's growing library of enterprise applications like Dashboard Platform and Big Data Platform that can help make sense of complex data. Support and updates are included, too. It's a 10K value for free. Check it out at Syncfusion.com. All right, buddy. Who's our winner? Today's winner is Craig Lecter. Woo! No golf clap. Craig, you must feel like the luckiest guy in the world right now. I hope you're not listening with your friends at lunch. And Craig just won the Syncfusion Essential Studio. It's a big pile of awesome from our friends over there. If you don't know what we're doing here, go to dotnetrocks.com, click on the big Get Free Stuff button — no, Get Free Stuff button — answer a few questions, and join the .NET Rocks fan club. We have thousands of members all over the world, and every show we like to give away stuff from our sponsors. And every December we give away a $5,000 technology shopping spree to one lucky member of the .NET Rocks fan club picked at random. But you've got to sign up to win. We ask our guests on every show: if you had $5,000 to spend on technology today, Troy Hunt, what would you buy? Has everyone seen those, like, unmanned missiles that were out there? How much do they go for? Oh, one of them. A little more than five grand. Can we pool our resources? Yeah, we all get together. Are you going to ride it? I don't know. I'm still deciding, because they look kind of cool. I mean, I just have the question, like, what would you do with a missile exactly? What wouldn't you do? Whatever it is, it's only going to be once. There might be this new president. Oh. All right. Well done, sir. Steven Haunts, what would you do with $5,000? I mean, I'm quite good gadget-wise at the minute, so I don't really need any more tech in mind. You're just talking crazy talk now. Missile. Missile. So what I was going to say is, you know, I've been working quite hard recently. I could do with a holiday with the kids. So maybe it's just go to Disneyland. Awesome answer. But failing that, I'll buy a missile with Troy. The General Atomics MQ-1 Predator. That's a drone. You're going all out now. But it can carry missiles. It's the missile carrier. I'm worried, as Troy's been sitting here Googling missiles for the last few minutes. That'll tell the NSA. How do you get into the US anyway? So how will I get in now? So now, I hear there's a bottle of Scotch that costs about $5,000.
Are you interested in that or something else? What if it was like an IoT bottle of Scotch, maybe? But would that count? I would like one of the HoloLenses actually. Then you could play with digital missiles. I think we just went on the missile thing all over the place. So that's $3,000, so there's $2,000 left from there. So there's $12,000. Will that buy a missile? Because now you have a missile and you can build a launcher. And it's like he's checking the retail price. Yeah, they're $2.38 billion. For one missile. Oh, no, that's the development price. No, it's only $4 million for a unit. Bargain! We're closer. He's probably giving away $4 million. If we put a donation bucket out at the next one. I can't think of a $5,000 Scotch. No, there's a $15,000 one. Yeah, the Macallan Reflexion — we saw that when we were up there in January — was $15,000. What does the Shackleton stuff go for? It was only a few hundred bucks. Oh, really? That's $200,000, if you can simply buy it. But that's the second edition. You get the first edition, it's a bit more expensive. Yeah, that was tricky. It involves an Antarctic mission. And it has to be from 1912. Yeah. Yeah, these are not normal problems. We found this — when we were up in Scotland, we went to this bar and they had the Reflexion by the shot. I think it was $375 a shot. And I didn't try one. Because there were no good outcomes. Either I was going to like it, and then I've bought a $15,000 Scotch and I'm in big trouble; or I didn't like it and I blew $375 on something I don't want. That's almost as expensive as a pint of beer in Norway. Oh, no. As I found to my horror the other day. Yeah, absolutely true. Okay, there you go. I quickly whipped out Master of Malt just to focus on what's actually important, which is expensive Scotch. They had 24,000 pounds for a bottle of Balvenie. The Compendium — and it's at least five bottles of Balvenie. Sorry, sorry. Well, that's a deal. We just got off the deep end. But getting more in our range — how much is a pound these days? Anyways, is it two to one? $1.5 to one? $1.8. Somewhere in that neighborhood. Yeah, the Glenfarclas was $62,000 — 2,800 pounds. That'll level it up. All right. So Niall's still not interested. Maybe a HoloLens and a $1,000 bottle of — what was the one you got for Kent? That was a grand. Oh, the Glenfarclas 40. Nice. That was very nice. Yeah, you'll buy a couple of those. Yeah, I was burping oak for an hour. Welcome to Scotch Rocks. That escalated quickly. I don't want to talk about passwords anymore. I'm sad. I really want to talk about developers thinking about security. Like, say they've written software already, and they're working on the next round of work items for the next sprint. How are we starting to talk about at least incorporating more security into software, so that next time it's not our app that was breached? I think for, you know, yeah. All right. You can go first. So speaking particularly from Steven's and my vested interests as Pluralsight authors, education for developers is really, really cheap. In terms of what you can spend your security money on — there is a lot you can spend on products; you can get security in a box, and they're big boxes with lots of blinking lights and thousands and thousands of dollars. And they sort of do one little thing for one particular app or one particular company. But you educate people and they get to reapply that over and over and over again.
And they also get to apply it at the time where it's the cheapest to fix security, which is when they're writing it. Because we know that for any bugs in software, whether it's security or business features or performance, the worst possible time to have to make a change is when the thing's all live and it's already out there. It's that sort of exponential cost thing. So, yeah, for me, just getting these folks to sit there and, even just, you know, whether it's go through our courses or do some training or something like that, just to sort of skill up a little bit — it has a fundamental impact for a very little amount of money. Well, one fundamental thing everybody can do is use HTTPS everywhere. I mean, that helps us a lot, doesn't it? Because we really can't trust our routers and links. You can't trust the Wi-Fi. I don't trust Wi-Fi. I don't trust Wi-Fi. He's carrying his pineapple with him. That's the kind of guy he is. But one of the things — this security culture thing is like building this into your team and getting it together, so that people are starting to think, you know, I as a developer have a responsibility for this data, and if it ever gets out, it could ruin something. Sure. Now, if... You work for Ashley Madison? No. Okay. If I did, yeah. Yeah. We wouldn't know. Yeah. The thing, all jokes aside — like, I keep making this kind of point: don't make my job any easier. Because it's getting to the point where it's getting too simple to do a lot of the hacks anymore, because the researchers — we're all just looking for one mistake you're going to make. You're trying to build your security boundary around your applications, and we just go, ah, there's a little hole you forgot, and that's it. So think about what happens if — when — this gets broken into and someone steals all your data. What would happen? So if you assume that, okay, we've got hashing on all our sensitive information, we've got hashing across the entire database, we've got like data level encryption, we've got all these other kinds of different techniques available, that's great. And then look at going, well, I've done that, so if someone breaks in, they can't get anything. And then, how do we stop people getting in? That's kind of the two parts. It's not just like, let's build a huge wall around everything. That doesn't solve anything for Americans. That's what you think, that's what you think. Believe me, believe me. But I talked to the Paula Januszkiewiczes of the world and she's like, I'm going to get in. That's what happens next. No, she is exceptional. One of the most terrifying... Do not let that woman touch your computer. It's a mistake. Ask me how I know. There's a great story about her. You want to tell it? Oh, you guys know this story? Yeah, I know. She's a petite, blonde, Polish girl with an English accent, because she learned English from an English... Very soft-spoken, very sweet. She goes into an office that's going to hire her for pen testing a half hour early and asks the receptionist if she can get online to pick up some notes, because she's really nervous about the meeting. And then by the time she gets to the meeting 30 minutes later, she has every administrator password already. So she says, let us begin this conversation with all of your passwords. It was kind of a question of, why should we hire you? Yes. Any questions? My question is, why shouldn't you? You don't need to hire me, you're done. Yeah. So, but I just sort of acknowledge this idea that...
I'm not going to acknowledge that. You good? You sure? Yeah. Happens. But I just sort of acknowledge this idea that penetration is going to happen. It's just really... And I put my IT hat on here. We talk about defense in depth because it sounds good, but the reality is there's no wall that's unbreachable, and it's just how far are they going to go after that. Data needs to be encrypted on the disk, right? So that even if it's taken, it's like, good luck, you don't have the keys. Keys should not be sitting in a text file marked keys. That's your other problem. Well, Troy demonstrated in one of his talks earlier on, with hashcat, how quickly he could do it on commercial grade hardware. Not even like, you know, kind of industrial stuff. It's just commercial. Yeah. It's just consumer grade hardware. Yeah. Just a straight-up GPU, and how quickly you can crack passwords based off just random brute force. And it's astonishing. There are now companies specializing in supplying you with a box of 10 graphics cards that you can go off and just crack 350 billion hashes a second with. And, you know, that's how quickly it'll get through certain types of hashes. But what about throttling password attempts? You know, only allowing a certain number per second. Well, that's why you should do something like a password-based key derivation function. What? I mean, what you're talking about, Carl, is like in the app itself, right? Yes. You can only make so many HTTP requests — HTTPS requests — per second or whatever it may be. But I guess in terms of password hashing, it's a question of, once the password storage has been compromised, and someone's SQL injected and sucked out all the passwords, and you've got the hashes, you've got no more app to do any throttling. And so to Steven's point, now we're talking about things like PBKDF2, where you can effectively increase the workload of how difficult it is to create the hash, so that you can slow the whole process down. But if rather than being able to do, you know, 4 billion MD5 calculations a second, you can only do 4,000 bcrypt calculations a second, well, you've just increased your password strength a million times over. Right. And made it just a little less interesting for anybody trying to break through. You know, if they actually wanted to work, there wouldn't be data thieves. Well, the thing is, if it takes you, say, 20 seconds to crack a password. Right. But if it takes you 20 years. Yeah. Well, that's the level of entropy, because of the fact that they've done a correct hash and it's salted correctly and everything's done right. Until they get a faster computer than you. Yeah, but that's the thing. You're up against the, well, I can crack more passwords with my iPhone than I could crack with my old computer. But I gotta tell you — everything we've talked about — so one more time. I gotta tell you that it's only when I talk. Why is that? It's like, honey, you keep interrupting me. But everything we've talked about so far speaks to IT's responsibility for security more than development's. Encryption on the disk, password policy — like, this is stuff for the ops guys. Do I, as a dev, need to worry about this? The devs are building the software which chooses the encryption, or rather the hashing function. I mean, if, for argument's sake, you go out and use the ASP.NET membership provider from 2012, you have chosen a product which is going to hash with SHA-1 and a salt, and that's going to be pretty much useless.
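To make the PBKDF2 point concrete, here is a minimal sketch in C# using the framework's Rfc2898DeriveBytes (PBKDF2 with HMAC-SHA1 by default). The iteration count and output sizes are illustrative choices, not numbers from the panel; the idea is just that every guess, legitimate or brute-forced, has to pay the full workload.

```csharp
using System;
using System.Security.Cryptography;

public static class PasswordHasher
{
    private const int Iterations = 100000; // the tunable workload factor

    public static Tuple<byte[], byte[]> Hash(string password)
    {
        var salt = new byte[16];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(salt); // a unique random salt per password
        }
        using (var kdf = new Rfc2898DeriveBytes(password, salt, Iterations))
        {
            return Tuple.Create(salt, kdf.GetBytes(32)); // (salt, hash)
        }
    }

    public static bool Verify(string password, byte[] salt, byte[] expected)
    {
        using (var kdf = new Rfc2898DeriveBytes(password, salt, Iterations))
        {
            var hash = kdf.GetBytes(32);
            // Compare in constant time so we don't leak where the mismatch is.
            var diff = hash.Length ^ expected.Length;
            for (var i = 0; i < hash.Length && i < expected.Length; i++)
            {
                diff |= hash[i] ^ expected[i];
            }
            return diff == 0;
        }
    }
}
```

Doubling the iteration count doubles the attacker's cost per guess, which is the same lever Troy is describing when he compares billions of MD5 calculations per second to thousands of bcrypt ones.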
I mean, it's you as the devs who are going to choose how they get stored on the disk. IT is choosing the Active Directory implementation and that sort of password hashing. But in the app, it's mostly going to be the devs. Or you choose to go and do the social login sort of thing and you make it someone else's problem. Also, I mean, if developers are putting that much effort into, say, doing record level encryption on personal information, that's all great. But if you don't store your keys correctly — so if you just, as you said before, store keys in a text file on a disk. Who would do that? Sony. I've seen that. I've also worked for companies that do that as well. Private keys in a hidden folder on the C drive. Yeah, because that'll fix it. That's protected. No problem. I mean, a lot of us work in regulated industries, so finance, healthcare. So we really need to start looking at things like hardware security modules or Azure Key Vault. Right. Still feeling like an operations set of tasks, right? Even when we talk about database storage as a whole, I've got a DBA. He's crazy, right? Like, he's been beaten up by the infosec guy enough, and everything written to that machine now is encrypted. And the dev hasn't got a lot of responsibility for it. He's going to be calling stored procedures. He's going to write encrypted data. So here's a story for you, if I can speak. We at App vNext, which is my consultancy, we pay our developers and our consultants, usually, by ACH. And in order to do that, we had to set up a special ACH account with our bank, and they came out to the office and they told us everything. And they came with an envelope. And in the envelope was an RSA generator. It was a battery operated little fob. And every minute, a different number — an alphanumeric number — came up on the screen. I think it might have been 12 characters or something like that. And so when you go to the website to log in, you're supposed to put in the number that's on the screen. And the preferred way to do this is to wait till it flips, because obviously you don't want it to change while you're typing. You wait till it flips, you put it in, and there's a secondary system running the same algorithm on the server that is matched to that key. And so it will come up with the same number every minute. And that is the way that you get in. So there's never a password per se. Oh, that's in addition to your regular password to log into the system. So there's two levels of security. I thought that was really brilliant. You know what some people do? So this is the interesting thing. We get good implementations like that, and then people go and stuff them up. So for example, you see instances of people getting their RSA token, sticking it to a board, pointing a webcam at it. So it doesn't matter where they are, they can go and actually read it off the webcam. I saw a guy write a blog post recently where he even wrote the code to OCR the token from the webcam, so that he didn't even have to type it in. So as good as we build some systems, there's always going to be someone who wants to go and screw it up. That is software as a service. Now is there a combination of that technology with maybe a near field RFID or something like that that can... So as long as I have it in my pocket, it reads the number from it and can log me in. Now that's kind of convenient. As long as it's secure, of course. And we all know RFID is really secure. It is. That's a joke.
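The fob in that story can be sketched roughly along the lines of RFC 6238 (TOTP): both sides hold the same secret and derive a short code from the current time window, so they agree on the number each minute without it ever being transmitted. This is an illustration of the idea, not the actual RSA SecurID algorithm; the 60-second step is just matching the story.

```csharp
using System;
using System.Security.Cryptography;

public static class TimeCode
{
    // Derive a six-digit code from a shared secret and the current time window.
    public static string Generate(byte[] sharedSecret, DateTime utcNow, int stepSeconds = 60)
    {
        var epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
        var counter = (long)(utcNow - epoch).TotalSeconds / stepSeconds;

        var counterBytes = BitConverter.GetBytes(counter);
        if (BitConverter.IsLittleEndian) Array.Reverse(counterBytes); // big-endian, per the RFC

        using (var hmac = new HMACSHA1(sharedSecret))
        {
            var hash = hmac.ComputeHash(counterBytes);

            // Dynamic truncation: pick 4 bytes at an offset taken from the hash itself.
            var offset = hash[hash.Length - 1] & 0x0F;
            var binary = ((hash[offset] & 0x7F) << 24)
                       | (hash[offset + 1] << 16)
                       | (hash[offset + 2] << 8)
                       | hash[offset + 3];

            return (binary % 1000000).ToString("D6");
        }
    }
}
```

The server runs the same function with the same secret and compares; real implementations usually also accept the adjacent window or two to allow for clock drift.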
The developers have got the responsibility of, okay, they need to educate themselves enough to know what they're supposed to use. If they take the basic implementations, like Troy said, and do SHA-1, they may not even know that this is bad. That's another problem. We start seeing, oh, I'm hashing my passwords. Oh, great. What are you using? I'm using MD5. I'm like, okay, great. I'm using ROT13. Well, if you use MD5, at least you're saving on disk space. But that's the thing — they may not know. It's the education factor of, okay, why shouldn't I do this? What should I be using? And people like us are saying, you should be doing this. Here's our cheat sheet for avoiding certain pitfalls, to just get you beyond level zero and level you up to one or two. And then you start to see the light and figure out that, okay, yeah, maybe I should be looking at this better. I think it's the security culture — building in the concept of, if this data is stolen, what will happen? Rather than just people saying, oh, I've managed to create a user in the database and I've managed to retrieve the user from the database. I've done my job, well done to you. So we should be scaring our developers. I like that. All they've got to do is listen to Troy's .NET Rocks interviews and they'll be scared still. But I agree, you know — the fear of Troy. There's a cheat sheet. The shows we've done recently, speaking about security — besides, you know, terror with Troy... Can we get that as a podcast? terrorwithtroy.com. The OWASP Top 10. And it kills me that number one is still SQL injection. Yes, it kills me. I gave a talk yesterday on it. People are like, oh my God, you're still — are you kidding me? Yeah. Well, and I totally get that there's a whole bunch of legacy websites out there that are vulnerable, and there are tools that will help you find them fast, too. And then help you exploit them even faster. But I would just hope we're not green-fielding SQL injection. Yeah, we are. Yeah, we are. And I'll tell you why we are. There's a blog post that I show quite a bit from last year. So, you know, talking 2015, where a guy has written out how to do, effectively, a password reset in ASP.NET C# with Web Forms — which is, I guess, a bit unusual in 2015. But anyway, that's what he's done. And the whole thing is, it's odd actually. It's got one section of SQL code which is beautifully parameterized and works great. And then the next section beneath that is just SQL injection all through it. And this is a new tutorial, you know, barely one year old, written there, that then has a bunch of comments from all these people saying thank you, that's very useful. So how many people then go on and copy that and build their new stuff like that? Right. You know, it still happens. Still a lot of new stuff that does that. Because, like, you know — have you guys seen the O RLY covers? Yeah. Yeah, it's like, you know, Copy-Pasting Answers from Stack Overflow. Yeah, it's a book cover. Like an O'Reilly book, but it's O RLY. Fake book covers. It's brilliant, but that's what a lot of people do. Yeah, exactly. Security: How to Ignore It and Deliver Your Project on Time. The thing is that people will just go, this code works. I press F5. The result that happens is what I expected. Great, don't touch it. Don't understand it. Run away. Ship it. Ship it.
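Since the tutorial Troy describes had both styles side by side, here is that contrast sketched in C# with System.Data.SqlClient; the table and column names are invented for the example.

```csharp
using System.Data.SqlClient;

public static class UserLookup
{
    // Vulnerable: the input is concatenated straight into the SQL text, so
    // something like "' OR '1'='1" rewrites the query itself.
    public static SqlCommand Unsafe(SqlConnection conn, string email)
    {
        return new SqlCommand(
            "SELECT Id FROM Users WHERE Email = '" + email + "'", conn);
    }

    // Parameterized: the input travels as data, never as SQL, so the same
    // payload is just treated as a strange email address.
    public static SqlCommand Safe(SqlConnection conn, string email)
    {
        var cmd = new SqlCommand(
            "SELECT Id FROM Users WHERE Email = @email", conn);
        cmd.Parameters.AddWithValue("@email", email);
        return cmd;
    }
}
```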
And I think that's what we're starting to see a lot of. And, you know, developers — we're very good at going, here's a problem, I've written some code that solves said problem, but I haven't thought about who would actually use this. And it's like trying to give, you know, my son like a knife or an axe or something. Because he'll just go, oh, look daddy. So that's what happens. I think that there's going to be more like that. I like the model of physical keys. Like you have a key to your house. You have a key to your car. And this security model has worked pretty much for a long, long time. And the whole idea is that there's one thing that's unique that you keep on your person all the time. And that's the only thing that's going to get you into your house. Well, I mean, what if your laptop had a lock — an ignition switch, you know, you put in a key and turn it, and then you can use it. And all of the security flows from that. I have a really good joke. Your house was good until you had Windows in it. No. Boom. That one's taken. Yep. Even the key thing on the house — I mean, the idea now is that you've got to IoT the house, right? Right. So you don't — because this is the beauty of it. Who wants to carry around a key, they say, because you just walk up and you put your finger on the door. And hopefully it's your door that opens when you do that. But we're seeing vulnerabilities in these sorts of things. Like there's an IoT doorbell. Apparently you need to have an internet connected doorbell. And you'd ring the doorbell and there's a camera there somewhere. But the problem is that when your doorbell rang and you looked at the monitor to see who was ringing it, it wasn't your house that was ringing. So when you start to wire these things in, I mean, it just doesn't take much to go real wrong. Yeah, I agree. We ran up against this today. I don't want to say that. We'll edit that out. That's what's going on at that point. Changed my mind. I'm not going to say that. Yeah, we did a show with Kim Carter when we were talking about infosec. And he talked about the OWASP ZAP library, and just building that in as part of your test sequence. When you're building a website, you're running through a set of security tests. And one of the things it's checking is the straightforward SQL injection vulnerabilities. The things that the script kiddies would run to try and breach your site — at least you're trying that yourself. If you've got a build server, there's a ton of CLI tools out there that you can just plug in, like Nikto. For example, just do nikto -h, and it's a program that will run through all the different vulnerabilities it knows and quickly scan your application and give you back a kind of, oh, we found this, we found this. You're being silly. Then there's more advanced tools like Arachni, which is a full-on web scanning framework. And if you're running Windows 10 with the new kind of Anniversary Update on it, it runs Bash. So you can run these natively, nearly, now. You just install it, and it generates a nice web UI. It has feedback. It gives you, like, here's the percentage of stuff we found. You have a SQLi test, an XSS test, and it can run these on the command line. But it also gives you back a nice kind of, this is a bug for you, and you can discuss it, or this is a false positive. And managers can see, like, okay, your security tests are getting better. And then you can start doing other kinds of static analysis testing and other types of tooling.
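One hedged way to wire a scanner like Nikto into a build, shelling out from a C# test step; the only Nikto flag assumed here is the -h host flag mentioned above, and treating a non-zero exit code as a build failure is an assumption about your CI setup rather than documented Nikto behavior.

```csharp
using System;
using System.Diagnostics;

public static class SecurityScanStep
{
    public static int RunNikto(string targetHost)
    {
        var psi = new ProcessStartInfo
        {
            FileName = "nikto",
            Arguments = "-h " + targetHost,   // scan the given host
            RedirectStandardOutput = true,
            UseShellExecute = false
        };

        using (var proc = Process.Start(psi))
        {
            var report = proc.StandardOutput.ReadToEnd();
            proc.WaitForExit();
            Console.WriteLine(report);  // surface the findings in the build log
            return proc.ExitCode;       // let CI decide whether to fail the build
        }
    }
}
```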
But nothing works better than a human to test the logic. These tools only test the common little bits. The stuff that we find a lot is where I go in and try to walk through the code a little differently. I think that's it. But I definitely recommend, if you've got no baseline or you're starting from scratch, some of these little tools. Straight up, they're free. Just hook them into your application set, and you can be much quicker up and running. I mean, we use a tool, kind of a third-party cloud tool, called Tinfoil Security. Yeah. Tinfoil Security. As in tinfoil hat. Yes. So it does a kind of external test against your websites. Right. And then it gives you security reports of any vulnerabilities. Is that built into Azure right now? It's built into Azure, and you can get it as a plug-in to Cloudflare as well. Wow. So there are some — I mean, I just like some actionable items here. Yeah. At least to have a sense, you know — and I think all of you at one time or another has done a hack-yourself-first kind of session in the past few years. It's like, this should be part of our process as developers: hack on your own site so that you know what vulnerabilities are absolutely apparent. And some of them might be your responsibility from a code perspective, and some of them might be your operations', but if you're not building that into a test sequence, you're just not even going to know. Are there any other tools, Troy, that you want to mention that we should be looking at for analyzing our security situation? I think the main comment I'd have on tools — because I saw Niall's talk yesterday as well, where he shared Nikto, and you're running it in Kali Linux — is that, you know, a lot of this stuff is very cool, but I think one of the challenges is, for a lot of developers, particularly Microsoft developers working in ASP.NET, this is really foreign territory. I mean, out of interest, how many people here want to spin up Kali Linux and run unknown tools? There's one guy, and he's got a hat and a beard. He's also grinning a bit weirdly, though. I think we have then established the point. It is a barrier to entry, and this is why I kind of like the Tinfoil-slash-Cloudflare sort of stuff, where you effectively just route your application through some sort of a proxy, which looks for these sorts of things, gives you a layer of security. And by no means do I want to suggest that you shouldn't go through and actually understand SQL injection, cross-site scripting and all this sort of thing, but there needs to be low-friction stuff that works in a fashion that developers are comfortable with. Otherwise, I just think it's too big a leap sometimes. Sure. Yeah. Agreed. Last words, gentlemen. Niall. When do I get my whiskey? No. Steven. I think I'm just quietly over the moon that the airport strike is not happening. Yeah! Awesome. That was causing me considerable stress yesterday. Okay, for real kind of last words — guys, seriously, don't make it so that we can walk up here and do really interesting talks because you've screwed up. Put these guys out of business. Yes, I'm nearly kind of suggesting people try and do that: secure code, secure data, think really closely about what you are working with, and what happens if it gets out in the wild, who will be affected. Scare thyself. Yes, exactly. Troy? Yeah, look, I think, for me, the big thing is experience stuff firsthand.
So whether you go and get that in Pluralsight courses, for either of us — stuff like Hack Yourself First, which is a really popular one of mine where it just goes, here is how to do SQL injection. Like, go and do it. Don't just Google for a website and do it. Like, do it on the site that I created it for. But experience it firsthand, because once you get to play with this stuff, it's actually pretty cool. And a lot of people get really involved in it and then sort of get a newfound passion for it. And with that, I guess a key thing to say there is, if you're going to try and do these things, do it on a site that you own, not one that you don't. Otherwise you might be living in a box for a few years. Do it on Troy's site. On my site. That's fun. Yeah. All right, well that's the show, guys. Let's give it up for our security panel. I will see you next time on .NET Rocks. That's enough for the show. But if you've got a minute, maybe you could take some questions. If anybody's got a question. There's a hand up left there. Sir. What do you think about the government providing identity services? Government provided identity services? Yeah. Interesting. I'm sure that'll work out just fine. They never get hacked. What could happen? So the Philippines election commission got hacked a couple of months ago, and they lost data on half of their 110 million people. Well, they didn't lose it. They know where it is — there are lots of backups now. I think in one way it is necessary, because you've got so many different government services which sort of need to get tied together to function effectively. In another way, it's unavoidable, because it's not one of those things where you can say, well, look, I just don't want to give my information to the government. It's like, you kind of can't do that. So it's happening. And I think that on balance it provides us a bunch of positive things, but you just kind of hope they're not going to do a Philippines. And the kind of compare and contrast is to when Microsoft is asking us to use an identity service from them. Yeah. And, you know, who's more qualified, theoretically. But either way, you've got the same issue. You're going to put your identity responsibilities in someone else's hands. Could you see that work across companies, across services? In the public cloud you could. So I think it's interesting. Another question? Is there a hand back here for me? Right up front. Are you scared of using the Wi-Fi here? Are you scared of using the Wi-Fi here? It's open Wi-Fi, so — and Troy's in the room. Yeah. And he's got a pineapple. He's not afraid to use it. There's only about six or seven of these devices running around the conference right now. So, you know, yeah, if you go on open Wi-Fi, just assume everyone is listening and intercepting stuff. So just get a VPN. That's why my VPN's on, currently sitting next to Troy. And he's not even got the Wi-Fi pineapple on. Hey, if you haven't seen these before as well, these are Wi-Fi pineapples. So if you've seen my talks in the past, you would have seen me use this. But they're really super cool. Just Google Wi-Fi Pineapple. It allows you to hijack people's Wi-Fi, which, okay, is cool — that's one thing. But it's a really, really good lesson about how you can't trust the transport layer as well. They are amazingly cool. And they're only like $100. That's it. That runs off a battery pack that's currently suspiciously there. Let's not mention the battery pack.
Tell them what it does, right? Yeah. So what it does is act as a rogue access point. So every time your device pings out and looks for a specific access point, this device says, I'm that. So yours then goes, ooh, good, free Wi-Fi. So if you open up your phone and it says automatically connect to known Wi-Fi points, this device will say, I'm known, you'll connect, and now between you and the internet sits Troy. And he's a lovely Australian man, sometimes. And he wouldn't ever fiddle with whatever I'm doing as it goes onwards. But that's the thing. You can't trust the transport layer. Well, the first time your phone connects, what does it do? It goes and gets your Google mail and Facebook and whatever, and connects to Twitter and sees if there's anything. Sending all those passwords out. But that's why all of those services are now SSL. That's right. But that big wave that made all of these sites go SSL was because of these kinds of exploits. The real question is, is your mail server using SSL? With the username and password sent in clear text if it's not. Like POP3 is not secure. Yeah, it's text. It's just text. So all these little devices — you can plug this thing in and no one... it's like, if Troy holds it up again, the size of the thing — it just sits in your backpack, it's tiny, can be powered off a little phone charger. And, you know, you can always spot when these things are on, because all of a sudden you see a load of Wi-Fi points you don't recognize, and your home one. You see every Wi-Fi point. You see every Wi-Fi. Yeah, so you never get funny looks going through airport security with it. It's just an access point. Yeah, it's just a router. It's just a router. And it's kind of big for an access point. Like I've got one that's not much bigger than your thumb, right? They make them small now. Niall and I do also have some rather obnoxious ones. And Niall in particular has some big aerials, which always look rather suspect. Yeah, I've got the nine-decibel aerials, which are kind of like... But it's like four of them as well. It looks like an upside down spider. Or half an upside down spider. And they're exceptionally powerful. Like you plug them in and you turn it on in the hotel, and all of a sudden like four floors are connected to you. And everyone's wondering why their home Wi-Fi is working. It was a funny story — that's how I actually got to Øredev. Turns out when I was over in Poland, I plugged this in, and one of the organizers from Øredev connected to my Wi-Fi point and decided to log into the Øredev database, which was running over HTTP at the time. So the username and password for the Øredev database pops up, and I'm like, ah, nice. Oh, look, I'm on the agenda. So ladies and gentlemen, if you ever want to figure out how to get to a conference, you know — pineapple. Alright, I think we've got to leave it there. Another round of applause. Thank you.
Join Carl and Richard from .NET Rocks as they talk to security luminaries about the challenging state of affairs in security breaches and what developers can do about it. Are there coding solutions to these security problems, or is it up to the operations folks to keep data safe? What is the correct response to a data breach, and what should you do if it’s your data that’s been stolen? Are we all doomed? Does security really matter anyway?
10.5446/51819 (DOI)
Okay, thank you all for coming. Just one practical thing: there's a box outside; somebody wants you to push one of those buttons. My name is Harald Schurter-Rixon. I work at a small company called Outroom. I'm currently on parental leave, but when I'm not, I've been with NRK on the TV and radio player team for almost two years. A lot of this caching material is from our experiences and how we solved things there. So let's dig in. Black Friday — that is not something new. It has become the manifestation of how we're not able to scale. It happens every year. We all know Black Friday is coming. And sure enough, some sites crash. And here we have, on the left side, Best Buy, which failed miserably, unable to serve its customers. On the right side, we have Walmart. They've been quite cocky for some time. After they switched to a new platform, they've not had any problems. They did a deploy to production during Black Friday. That's how confident they are. We have the same situation in Norway, and we're a much smaller country. But if you tried to go at midnight on Friday to any of the electronics stores online and shop, it failed. Komplett was serving 200 OKs. They did render a page within a minute. That's better than the rest. The rest did not serve anything. And it's quite funny to see that you have all these shops that have spent millions on advertising just to not sell you stuff. That's a success. So we need to fix that. We need to talk about this — why this doesn't work. And then, that's the private sector, where everybody thinks it should work. And then we have government. We have the Norwegian equivalent of the IRS — Skatteetaten — and Altinn. I guess everybody runs to check their taxes when you get the pre-filled form, to see how much you get back or how much you have to pay. That fails as well. But I can understand that a lot better. They are under heavy regulations. They cannot scale the same way in the cloud. It's sort of fair. I can see why they can't do it. But they're getting better at it. And at NRK, our Black Friday is the election. We get a lot of traffic on the news site. The news site displays video. The video needs metadata that comes from my team. And we saw that this was not going really well. And one of the operations people went in, doubled the number of servers in Azure, and everything was running fine again. And during my rant on this, I said to a friend, well, you can buy 100 servers in Azure for 24 hours. A3 instances — that's 7 gig of memory, four cores. That will set you back $860. And he said, well, you can't do that. You will take down your database. I don't want to hit my database. I see no reason why we should hit the database when you hit the front page. And that's where caching comes in. There's a lot more to delivering an experience under extreme load than caching. It's just a small part. We have everything from domain modeling. You have infrastructure. You can use queues. You have patterns such as circuit breaker, especially useful when you depend on other services, which also get load during Black Friday. So perhaps your credit card service can't respond, and then you need to say, can we take that component out? Can we wrap it in something like a circuit breaker and continue? Such things will help you a lot. I know the Komplett guys will speak tomorrow; I hope they will talk about this. But caching is one part of making this whole thing work. And today I want to talk about two parts of caching.
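Before moving on, for the record, the arithmetic behind that $860: 100 servers for 24 hours is 2,400 server-hours, so $860 works out to roughly $0.36 per A3 server-hour — plausible for Azure's pay-as-you-go list prices of the era. Either way, the point stands: burst capacity is cheap compared to a failed Black Friday.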
If you look at it from the web client, we have the HTTP cache that lives with the client. It can be a proxy on the way to the server, but normally it's very well implemented in the browser. We then have output caching, cached in IIS or perhaps a Varnish server, et cetera. And then at the other end, you have the application cache. And you often need to hit your application, because you have a user who's logged in and you may need to display information for this user. So we cannot necessarily always use the output cache and just get rid of the problem. So today I'm going to talk about the HTTP cache and the application cache. CDN I'm not really going to go into. Of course, you have your static files; our streams and images will be served from CDN. So, HTTP caching is defined in RFC 7234. It's part of the httpbis split, where they split the HTTP specification up into separate documents. And the purpose of HTTP caching is to significantly improve the speed and experience for users. Two things happen: latency goes down, because the request is served locally, and network traffic goes down. So that's a really good thing about HTTP caching — we don't hit our servers. But it doesn't always just work. It's all about storing responses when you talk to the server and hopefully reusing that response instead of calling the server. And it says that a stored response can be considered fresh if the response can be reused without validation. And this is somewhat like milk. You know that the content will expire at some time, and once it's past that time, you really don't want to use that milk until you've checked that it's okay. And if it's fresh — that is, if a response is considered fresh — the client will just use it. It will not talk to the server. If it has to validate that response, it will call the server and will hopefully get what is called a 304 response, without any content. And one thing to keep in mind when we talk about this specification is that it is not written in terms of how you should cache. It is written in terms of what should not be cached in the client, and also what should not be reused. So I read it, and you can read this one. So as I said, it is what not to cache and what not to reuse. And the first part of this is determining: is this response fresh? That's all very well implemented in the browsers. They have an extremely good understanding of this, and you will have a completely different situation if you're making an API and you have API clients that are not browsers — that are, let's say, the HttpClient in .NET — which will not have caching built in. So if you're trying to use some of these techniques with an API, you absolutely need to get your clients to really use it. You need to educate your clients. So the first three of these are headers — max-age, s-maxage, and Expires. They are sent in the response from the server, and they tell the client how long this response will be valid. The last one, heuristics, is for when you don't have the other headers. If the server does not give you a max-age, an s-maxage, or an Expires, the user agent will try to calculate whether the response is still fresh. That is done by looking at the Last-Modified date you have in the local cache in the browser. It will look at when you last requested this, and then it will look at now. If the time between now and your last request is 10% or less of the time between when the document was last modified and when you fetched it, it will still be considered fresh. It will not necessarily call the server.
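To make the freshness headers concrete, a response might carry something like the following (the values are illustrative). With this stored, a browser will reuse the response for the next 60 seconds without contacting the server at all, and a shared cache such as a proxy may reuse it for 300 seconds:

```http
HTTP/1.1 200 OK
Cache-Control: max-age=60, s-maxage=300
Expires: Fri, 10 Jun 2016 12:00:00 GMT
Content-Type: application/json
```

Expires is the older absolute-date form; when both are present, max-age takes precedence.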
This is only a suggested implementation; there's nothing in the specification that says this is how heuristics must work, but there is a suggestion. So, if the response is not fresh, we can then ask the server: is this response still okay? To do that, we utilize two headers. There are ETags (has anybody used ETags? A few, yeah) and there's Last-Modified. These are combined with two request headers, If-None-Match and If-Modified-Since. What happens is that the server will, or can, add an ETag. Think of it as a hash of the content in the response. When the client wants to reuse a response that has an ETag, it will send the ETag to the server in an If-None-Match header. The server will then check: okay, has this changed? If the server still has the same ETag, it will just give a 304 to the client, and the client gets a really small response. There can be multiple ETags, so there can be multiple valid responses in the cache: the server may have one and the client can have several, and if one of them matches, you get the 304. ETags also come in two flavors, weak and strong. A weak ETag means the response is not byte-exact, so we cannot use it for range requests, but it is semantically the same content. A strong ETag means it is byte-exact, so you can actually do range requests against it and get just a part of the document. So that's one way to do validation. Then you have Last-Modified, which is a date. Now, an HTTP date is without milliseconds. That means that if you have a frequently changing system, you will not be able to use Last-Modified for this, because you won't know whether you have the same version or not. You use Last-Modified together with the If-Modified-Since request header to check whether the resource on the server has changed since then. Validation can also be used for POSTs and PUTs, but then it's used to check whether the resource on the server has changed when you post or put something, not for caching. There are a few more links on this if you want to dig into it. Mark Nottingham has written quite a few of the specs, and he also has a tutorial on HTTP caching and how it should work. Darrel Miller has created flow charts; if you really want to dig into how browser caching works, that's a really nice resource. And there's a little link on heuristics. We're going to touch on ETags again later. So now that we have sorted out the client cache, let's move over to the server. This is ASP.NET Core, or whatever it's called now; currently the namespace for this is Microsoft.Extensions.Caching. They deliver two types of cache with the new .NET Core framework. You have an in-memory cache which is backed by a dictionary, supports callbacks on eviction and when keys expire from the cache, and lets you put priorities on cache items. It will compact the cache by 10% on a garbage collection: whenever a Gen2 garbage collection happens, 10% of the keys in your cache will be thrown out. The thinking is that a Gen2 collection means you have used so much memory that .NET probably needs something to be freed. And remember that on .NET 4.5 there's a 2 gigabyte limit in memory; this is Core, so I guess that has changed by the time we get there. They also have a distributed cache, useful when you start having multiple servers and you need a central cache. It comes with three implementations in .NET Core: a memory-cache-based one, one based on Redis and one based on SQL Server.
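To make the validation story concrete before we get to the server-side caches, here's a rough sketch of the ETag flow in a plain Node server, in TypeScript. This is an illustration, not how any particular framework implements it:

```ts
import { createServer } from "node:http";
import { createHash } from "node:crypto";

const body = JSON.stringify({ message: "hello" });
// Treat the ETag as a hash of the response content, as described above.
const etag = `"${createHash("sha1").update(body).digest("hex")}"`;

createServer((req, res) => {
  // The client sends the ETag back in If-None-Match; if it still matches,
  // answer 304 with no body instead of resending the content.
  if (req.headers["if-none-match"] === etag) {
    res.writeHead(304, { ETag: etag });
    res.end();
    return;
  }
  res.writeHead(200, {
    ETag: etag,
    "Content-Type": "application/json",
    "Cache-Control": "max-age=10", // fresh for 10 seconds, then revalidate
  });
  res.end(body);
}).listen(8080);
```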
Of those, with Redis offered on Azure, AWS and Google Cloud, most people will use the Redis implementation. It's also worth noting that you are responsible for serializing your objects into the cache if you want to use that one. So, local cache: as simple as a dictionary. The data lives in memory, so it's fast, a lot faster than going to another process, because it's in your process, it's yours, it's available. A word of caution with a local cache: these are objects, and they are mutable. If you retrieve something from the cache and change it, and then retrieve it from the cache on another request, it's changed. We had a fun bug with that one; for fun, I think Björn and I spent three days digging into it. So let's see how that can look. What we have here is an interface following a pattern called cache-aside. Cache-aside is one of the patterns recommended by the Microsoft Patterns and Practices team; for those of you who were at the Patterns and Practices event Microsoft had in Oslo, you should know this. I've just split the interface in two so we can concentrate on one part of it. We have the Get call to the cache, which takes the cache key. The data parameter takes a func, and this is where you call the repository where you retrieve the data from a slow source. Then you have a time to live, with which you can tell the cache: I want this to live in the cache for that long. If we look at the implementation, it's fairly simple. The code I have here is on GitHub, all running on the old .NET, so I use the System.Runtime.Caching MemoryCache implementation. It tries to get the key; if it finds the item, it returns it. If not, it executes the func we passed in, adds the item to the cache and returns it. Quite simple. Here I have a small service; there's a service out there called randomuser.me, and if you need some JSON data, it's a nice place to get test data. We see that when I call this, it takes almost a second on this Wi-Fi. Then we can try it with the local memory cache, and we see it's at 32 milliseconds, so it's working quite well. We can let that one run for some time and see what we get. It's not very interesting right now, since we know it will be fast, but it will be more interesting when we get to the remote caches. So: 4,600 requests per second. Really fast. It uses a lot of bandwidth. So what happens when we move away from a local cache to a remote cache? Let's say you start load balancing your servers, or you run in multiple data centers, et cetera. Is it okay that you have a local cache on each server? What happens if you hit server 1 and then, after a while, hit server 2? The caches are different, because they've been initialized at different times, and users will get different responses. Is that okay? Probably not. And that's why you want to move to a central cache, such as Redis. So we did that at NRK. Actually, we first had the Azure Managed Cache. Has anyone used that? Yeah, one, two. It was, I think, Microsoft's third attempt at making a cache in Azure. What was nice about it was that it came with a client from Microsoft which had a local cache as well, so when you called the remote cache you would automatically get the value in a local cache too, and we had fast response times. Now we've moved to Redis, because the managed cache will be discontinued in December, so everybody has to move. Microsoft has not made their own client for it; they use the client library from the great guys at Stack Exchange to talk to Redis.
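For reference, the cache-aside Get described above, sketched in TypeScript. The names and the stub repository call are made up for illustration; the talk's actual code is C# and lives on the speaker's GitHub:

```ts
// Hypothetical slow source, standing in for a repository or database call.
async function fetchUserFromDb(id: number): Promise<{ id: number; name: string }> {
  return { id, name: "Jake Brown" };
}

type Entry = { value: unknown; expiresAt: number };

class LocalCache {
  private store = new Map<string, Entry>();

  // Cache-aside: try the cache; on a miss, run the loader (the slow source),
  // store the result with a time to live, and return it.
  async get<T>(key: string, loader: () => Promise<T>, ttlMs: number): Promise<T> {
    const hit = this.store.get(key);
    if (hit && hit.expiresAt > Date.now()) {
      return hit.value as T;
    }
    const value = await loader();
    this.store.set(key, { value, expiresAt: Date.now() + ttlMs });
    return value;
  }
}

// The loader only runs on a miss; a second call within the TTL is served from memory.
const cache = new LocalCache();
const user = await cache.get("user:42", () => fetchUserFromDb(42), 30_000);
console.log(user.name);
```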
But StackExchange.Redis does not come with a local cache, and the helpful MSDN page says that you should develop something yourself if you need that. So that is what I did. What happened first, though, was that we saw the CPU go haywire. We got really terrible response times and really had to dig into it. We saw that we spent almost 56% of the CPU time on deserialization, and that's because we had followed the guidance from Microsoft and used the BinaryFormatter. All serializers are not equal. Yan Cui, who is also a speaker here at NDC, has done some benchmarking of serializers, and it's not only binary serializers that differ a lot; there's a lot of difference between JSON serializers as well. His website is The Burning Monk, so if you Google "burning monk" and "benchmark", you will get to this page. Bond there is from Microsoft. It will not be that fast unless you tweak it; it comes with quite a bit of configuration which needs to be done to get it up to that speed, to do with object sizes, depth, et cetera. Protobuf is a protocol from Google; the .NET implementation is again from the great guys at Stack Overflow, and it's really fast. MessagePack is as well. They work in slightly different ways, and MessagePack is perhaps easier to start with; that's the one we'll see here. So when we run this demo, I run it against a local Redis instance. Redis is a cache server, originally for Linux. It's single-threaded, so you want to think about that when you configure it; you may want more instances. It has been ported by Microsoft, and you can install it using NuGet. What I like about it is that we can easily, let me get it here, let's take a new one: it comes with a command line interface. So I can connect to the cache, I can see the stats on it, we can see the keys in there, and we can add and delete keys. And you can, of course, connect to your Azure caches as well, if you run Redis in Azure. So we try to call this service with Postman. If we use the BinaryFormatter when we do this (I have a modified version of my call here), it will return 2,000 people. And if we try to just load our system with that, remember that we don't have any network latency: everything we're seeing now is out of process, but still on one machine. It's now running pretty badly. Then we can hopefully get some better results if we use MessagePack. So yes, we could handle more requests with a different serializer. The code for that is quite simple. When we work with Redis, since it's single-threaded, you want to have only one connection to the server. The client library will multiplex and pipeline all the commands for you, and it doesn't make sense to have more connections; the server won't process them in parallel anyway. When we look at the Add for Redis, it's quite simple. The only difference is that we need to serialize the data, so there's an interface called IItemSerializer. The same when we get data: we need to deserialize what we get. So we have the BinaryFormatter one, nothing magical going on here, just taking the BinaryFormatter, serializing using a MemoryStream and outputting it. And the same with MessagePack: use the MessagePack serializer, which caches the types for you and makes the serialization fast, and a MemoryStream as well. So how does that look in production? Quite good. We see we have a lot of cache hits, which we like, and only a few cache misses.
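Backing up to the serializer seam for a second: the seam is just an interface, sketched here in TypeScript. JSON is used for simplicity; a binary format like MessagePack or Protobuf would plug in behind the same interface with no other changes. The names are illustrative, not the talk's C# types:

```ts
interface ItemSerializer {
  serialize(item: unknown): Uint8Array;
  deserialize<T>(bytes: Uint8Array): T;
}

// Simplest possible implementation; swapping in a faster binary serializer
// only means writing another class against the same interface.
class JsonItemSerializer implements ItemSerializer {
  serialize(item: unknown): Uint8Array {
    return new TextEncoder().encode(JSON.stringify(item));
  }
  deserialize<T>(bytes: Uint8Array): T {
    return JSON.parse(new TextDecoder().decode(bytes)) as T;
  }
}

const serializer: ItemSerializer = new JsonItemSerializer();
const bytes = serializer.serialize({ name: "Jake Brown" });
console.log(serializer.deserialize<{ name: string }>(bytes).name); // "Jake Brown"
```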
What's worth noting here is the network bandwidth. When you use Redis in Azure, you normally see in the pricing that you pay for size, you pay for redundancy, and you pay for whether you want local persistence or just to live in memory. But you also pay for network bandwidth, and they're not really clear on how much network bandwidth you get. It says you get low, medium or high network bandwidth. And suddenly things start crashing, and that's probably because you've hit a network bandwidth limit. Or it may be; that was our problem. To solve that, we wanted to implement what we had with the managed cache: a local cache in front of the central cache. It works a bit like this. The application calls the local cache. If the item is not there, it falls back to the remote cache. If it's not there either, it executes a method from the application and calls the slow thing, aka your repository. And all the way back up, it adds the data it received to your caches. Our implementation of that is called a double cache. Again we implement the same ICacheAside interface; in pattern terms, I would call this a decorator. It takes a local cache and a remote cache implementation. When you add something, it's simply added to the local cache, which would be the memory cache we saw earlier, and it will then also be added to the remote cache, which will be the Redis cache. When we get things, it will first try the local cache; if the item is not there, it will fall back to the remote cache. And those two will each be responsible for adding the item to themselves. So we still have one problem with this, and that is that the caches will be out of sync. The data you have in the local cache will depend on when you got it from the remote cache. To solve that, we can use a feature in Redis called pub/sub. It's a really simple way of distributing messages. This is used in SignalR, for instance: they use Redis as a backplane, so when you have multiple SignalR instances, they use Redis to communicate between them. The same goes for socket.io, for those of you who use that on Node. So what happens is: one, we store the data in Redis; two, we push a notification; and three, that notification is published by Redis to all the subscribers. Then the local cache on each web instance updates its data based on the server. So now it gets a bit more tricky. We have a cache publisher, a simple interface that just notifies: I have updated a key, and here is the data type. We need to pass the type of the data, since the cache will be responsible for deserializing it into memory and will not otherwise know the type. It also notifies on delete, so if something is deleted from the cache, we can handle that. We then have the cache subscriber, which takes an event handler for when the cache is updated or deleted, and it also has a method to get data from the cache. Then we have the implementation of this: a publishing cache, which takes and also implements the ICacheAside interface, and which takes the remote cache, which will be the Redis cache, and the cache publisher. When something is added, we add it to the cache and then notify all the subscribers that this key has been updated. If you look at the Get, it's just a normal get, except that if we invoke the method, our repository, we will also notify that we have updated the cache. That's the publishing cache.
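A sketch of that flow in TypeScript, with an in-process EventEmitter standing in for the Redis pub/sub channel. The real thing publishes over the wire to every web instance; this only shows the shape:

```ts
import { EventEmitter } from "node:events";

// Stand-in for the Redis pub/sub channel.
const bus = new EventEmitter();

const remoteCache = new Map<string, string>(); // stand-in for Redis
const localCache = new Map<string, string>();  // per-web-server memory

function publishUpdate(key: string, value: string): void {
  remoteCache.set(key, value);    // 1. store in the central cache
  bus.emit("cache-updated", key); // 2. notify all subscribers
}

// 3. each web instance refreshes its local copy from the remote cache
bus.on("cache-updated", (key: string) => {
  const fresh = remoteCache.get(key);
  if (fresh !== undefined) {
    localCache.set(key, fresh);
  } else {
    localCache.delete(key); // treat a missing key as a delete
  }
});

publishUpdate("nowPlaying:p1", "Jake Brown - Some Song");
console.log(localCache.get("nowPlaying:p1")); // kept in sync with the remote cache
```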
And then we have the subscribing cache. This wraps the ICacheAside interface again; it wraps the local cache. It caches the deserialized types, and it attaches these two methods to the subscriber. When we get the message that the cache has been updated, it gets the data from the remote cache, checks whether there's a time to live or not, and adds it to the local cache. And this means that all our local cache instances will be in sync. It also means that we can forcefully invalidate and update our cache from other processes. So if you have a messaging infrastructure and you get a message that some data has changed, you can proactively invalidate your cache. We had a problem at NRK where there were so many layers of cache that it could take an hour for something to time out, because we didn't have any invalidation. That's a problem when you reveal the winner of some reality show by mistake; then somebody gets panicky. So it's really nice to be able to invalidate and forcefully update the cache when something changes. Let's just see how that can work. So here's our pub/sub cache. Notice that, again, we have the really fast response time, since we hit the local cache, not the remote one. We have our random user here; I've no doubt mispronounced that name. Then, in another small program here, I call the same service, update it, and send the published message. And if we hit it now, we see the updated value, Jake Brown. And this is really, really good. What happens when we start to use this, and this is from one of the servers, some performance counters, is that we have a lot of local cache hits. We do have some misses, where the data is not in the local cache, and then we hit the Redis server. And only those will go on to the database or some backend service. This is really what saves us, this part. Now, this can of course go wrong. I'm not saying that you should put your entire cache in a local cache, but you have more memory on the machine than you think. We took all the TV shows at NRK, and when we serialized them and stored them in Redis, that was about 700 megabytes. Not the streams, only the metadata. If we took only the 40,000 to 60,000 active TV shows, the ones where we have the display rights, which you can actually watch, it would go down to 300 megabytes of data. So we can, in fact, keep a lot of our data in memory at all times. That's a good thing. And the really good thing that happened is that the network bandwidth dropped. Before we had this local cache wrapping the remote cache, we were up to 2.7 gigabytes per second of traffic on our Redis instances. You won't get that normally; I think they were nice to us since we were running Redis before it had gone into general availability. After we added the local cache, it dropped to 600 to 700 megabytes. It wouldn't have worked if we had kept those high numbers. So, what time is it? Any questions on this? Okay, let's continue; I have some goodies at hand. Redis is more than just a key-value store, and if you use it only as a key-value store, you're perhaps not utilizing it completely. I recommend you all search for a document called "An introduction to Redis data types and abstractions". It's really nice to read that one and try to mentally map those data models to how you can use them in your application. You have the key-value store, which is the normal one used for caching, but you also have lists, you have hashes, and you have sets and sorted sets. On the sets, you can do intersections between things.
And then you have HyperLogLog, which is used to estimate the number of unique items when you're counting really, really large amounts of things. You can also do geospatial searches in Redis; we use this. And then you have Lua scripting (sorry about the rendering on that one). You can execute a script on the server itself, and it's an atomic operation: since Redis runs single-threaded, only one thing can happen at a time, and that gives you a sort of protection. We don't need to think about transactions; if you wrap it in a Lua script, it will run in that sequence. The scripts can be uploaded to the server, and you then call them by a hash. At NRK, we use this for the now-playing data. To sort the now-playing data we use a sorted set, and then we keep the song title and metadata in hashes. To add an item, we use a Lua script: we have a channel ID, we get in the program ID for the current song, and it stores some JSON data for it along with how long it should live in the cache. So first we add it to the sorted set for the channel, then we add the JSON data to the hash, and then we purge expired items. We don't want to keep all the tracks in Redis; we only keep what is available in the live buffer on our streams. To read it, we just get the sorted list and then look up the elements in the hash. A word of caution if you're going to use Lua scripts: if they get too big, they take time. And since this is single-threaded, if you start using a lot of time on your Redis server, your cache will be slow. We got burned quite badly on that one as well; we didn't really see how much we were tormenting our servers until things started to fail. That's not a fun thing, but it was one experience we learned from. In Redis you can see the worst performing scripts: you can log into the server on the command line, and there's the slowlog, where you can see the slowest commands you have. And that's it. Yes? Dependencies between cache items? Yeah, we have some of that handled by Lua scripts; it's a way to do it. Yes? Do you need to serialize and deserialize the data, or can you just cache byte blobs? You can cache byte blobs; that's why we take the object in memory and serialize it to bytes. Because you provided the serializer? Yes. And you can choose not to. It's just that we had the objects, so we needed to serialize them. Thank you all for coming.
Could you survive Black Friday traffic loads, or will your app fall over if it goes viral? Do you know what your ETags are doing right now? All of these are important questions, and this session aims to answer at least some of them. We will go through how to get caching working for you and show you some real world examples. It will start small at conditional HTTP requests and build up to application caching using a layered distributed cache service driven by pub/sub. "There are only two hard things in Computer Science: cache invalidation and naming things." - a famous saying. Let's make sure we're only left with one hard thing: naming the next .NET.
10.5446/51821 (DOI)
Hello. Hi. My name is Einar and I work as a front-end developer at Bekk Consulting, and I'm going to talk today about our journey of organizing our CSS. We recently had the opportunity to start fresh on a project, and we thought we might as well put in a little effort and make our CSS better than our old CSS, which was not so good. A lot of people love to hate CSS; they use this GIF to show how CSS can behave sometimes. But I think CSS can actually be awesome. So what's out there? I'm going to go through a few of the methodologies we researched while figuring out how we were going to organize our CSS. The first one is object-oriented CSS, which encourages reuse and more efficient CSS. It's based on two principles. One is separation of structure from skin: structure being everything like margin and padding, and skin being color, background and so forth. The other is separating container from content, which means there's no hierarchy in the CSS: every little piece, every class, should behave the same no matter where you put it. One drawback is that you might end up with a lot of CSS classes in your HTML, and that's something I personally don't find so nice. Another way to do it is atomic CSS, which I feel is kind of like object-oriented CSS taken to a whole different level. You break every piece of CSS down to the bare minimum, so something like this would be broken down to something like this. And then again, a drawback is that your HTML ends up describing how everything looks, and you get a lot, a lot of classes. In 2016, we now have something called Atomizer, which takes this even further: you describe the whole style in the HTML, and it atomizes parts of your HTML and generates a CSS file looking something like this. You don't even have to write any CSS at all. But I feel that would effectively be styling everything inline. Atomic CSS should not be confused with atomic design, which is more of a thought guide for anything, not just CSS. You break everything down to the smallest possible modules and call those atoms, then build molecules out of multiple atoms, and organisms out of multiple molecules, and so on. It's a very nice way of thinking about folder or file structure for your CSS, and actually for many parts of software development. You also have something called Scalable and Modular Architecture for CSS, or SMACSS. I'm even wearing a SMACSS t-shirt. But even though I'm wearing the t-shirt, I don't believe in it anymore; or at least, I used to be more of a follower of SMACSS than I am now. It basically says you break everything down into five categories: base, layout, module, state, and theme. If you're interested, there's much more; you can read it on their website, it's free, and it's a good read, actually. And lastly, before I come to what we ended up with, there is BEM, which people might be familiar with; it's kind of popular at the moment, I feel. It was developed by Yandex, and it pretty much says that you break everything down into blocks, and a block contains elements and modifiers. You can visualize it as an object with multiple objects inside, plus properties. An example would be the media example down there. So what did we end up doing? Well, we read a lot of articles about all the different ways of doing things, and we found something that we felt would be nice.
It's called Component CSS, or CCSS, also known as simplified BEM. You have some global utility classes plus the block-element-modifier structure of BEM, only instead of a double underscore for an element, you use a simple dash with camel casing. And this is actually very nice: you can use a preprocessor like Less or Sass and use, in this case in Less, the ampersand selector to nest everything, and the code becomes relatively small and easy to read, in my opinion, rather than writing everything out over and over again. You end up with no inheritance, just like object-oriented CSS: everything is independent of everything else, so the structure of the HTML shouldn't have any say in how things end up looking. And we have global utilities, which might seem more like something you would do in atomic CSS: if you just have something small you want to add here and there, you can add a modifier class or just use a global utility class. Some setbacks, though. We found that if you want a hover selector on the block element and want to nest it downwards, you have to repeat the name of the block and everything below it inside the hover statement, so that you can change the state on every single part, since there's no real inheritance in Component CSS. Or you can make a modifier class and use something like JavaScript to toggle it to how you want it. That's actually, I think, the biggest setback we've had with this. The takeaway has been that it works really great with React, which is what we use now in the project: toggling modifier classes has been really easy. It creates single-purpose files and a logical folder structure, especially if you add some atomic design elements to it. The files are small and maintainable, and it works really great with a preprocessor like Less or Sass. But it's not perfect. You always wish everything will just run smoothly, but you always find something bothering you. And it's still very early in the project, so we still have a lot of time to screw something up. Thank you. Oops. Sorry. Wrong button. There. So, here we go. I'm going to talk about HTTP/2 and what the deal is for us web developers. First a bit about me: I'm Erling, and I work at Acando as both a front-end and backend web developer, and I've done so for six years now. HTTP/2 claims to be faster, easier and more robust. Those are quite clear goals compared to HTTP/1. We will look into those three things, but first some brief history. For HTTP/1, nothing has really changed since the 90s. There was a revised set of RFCs in 2014, but basically what we are using today is from the 90s. In 2009, Google thought the Internet had moved on, so they announced SPDY. Then we had a call for proposals for HTTP/2, and the SPDY draft Google submitted in 2012 was to become an exact copy of the first draft of HTTP/2. In May last year, it officially became a standard. So HTTP/2 is basically a superset of HTTP/1. Everything you do in HTTP/1 with status codes, methods and so on is still the same. You use the same URI schemes, the same ports as standard, and so forth. But HTTP/2 has one major new thing: multiplexing. With HTTP/1, you needed to create one TCP connection for each request, while with HTTP/2 you can have multiple requests on the same connection. These are called streams.
Those streams are independent and non-blocking, so you can have a lot of streams running simultaneously. In this way, we avoid a lot of the workarounds we have created for HTTP/1, like concatenation, sprites, domain sharding and inlining. I'm going to show you a video. This is a comparison of HTTP/2 and HTTP/1, set to 100 milliseconds of latency, and each page contains 192 small images. On the right-hand side here, we create just one connection; on the left-hand side, we have a lot of connections. As you can see, it loads much, much faster on HTTP/2 than on HTTP/1. This is kind of an edge case, since I put 100 milliseconds as the latency, but still, you get the picture. Prioritization: HTTP/2 comes with prioritization. That means a stream may be dependent on the completion of another stream, so you can say that this should not load before that one has completed. A stream may also have a weight. This gives you a way to say that this is more important than that, and thus you can create a dependency tree looking like this. It gives a lot of control to the client, but the spec says the server doesn't have to honor it; it's a wish from the client, not a demand. Most of the servers implemented today do handle it, though. HTTP/2 also has flow control. This can be enabled for just a stream or for the whole connection, and it lets you limit the data going over the connection or the stream. Typically, if you have a device with low memory, you want to limit how much data comes into memory at once before it can be handled. Or, if you have video streaming and the user presses pause, you can limit or pause that stream without affecting the whole connection. Compression: in HTTP/1 you can use gzip and things like that, but it's still a text-based protocol. HTTP/2 is binary. That means you can no longer telnet into an HTTP/2 server and type HTTP commands at it. But all the tools you usually use for that now support HTTP/2; if you use Wireshark or curl or whatever, you can easily do it. There's also another RFC, released together with HTTP/2, called HPACK, which is header compression. That means you don't have to send every single header on every request: you can have some headers for the connection, some for the stream, and so forth. The headers are also compressed, which they are not in HTTP/1. An example of how this can be much faster: if you have really large cookies, they will only be sent once, and compressed, instead of uncompressed on every request as in HTTP/1. Server push: this is kind of hard to wrap your head around, and it's one of the things in HTTP/2 where we really don't know how it's going to be used in the future. The principle is that the server may speculate about what the client might want, and then try to send data to the client before it's asked for. It does so by sending what is called a push promise, and the client may or may not reject that promise, so the client has control. An example: you request index.html, and the server sees that, okay, every last user who requested index.html also wanted a JavaScript file, a CSS file and some images, so I'll just send them to you right away instead of waiting for the client to ask for them. This is kind of like inlining, but inlining can't be rejected by the client. So typically a mobile client could reject large images, while a desktop client will receive them right away.
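Node's built-in http2 module can illustrate a push promise. A rough sketch, assuming you point the placeholder paths at a real key and certificate (remember, browsers only do HTTP/2 over TLS), and remembering a client is always free to reject the push:

```ts
import { createSecureServer } from "node:http2";
import { readFileSync } from "node:fs";

// These file paths are placeholders for your own key pair.
const server = createSecureServer({
  key: readFileSync("server-key.pem"),
  cert: readFileSync("server-cert.pem"),
});

server.on("stream", (stream, headers) => {
  if (headers[":path"] === "/") {
    // Speculate: anyone asking for the page will want app.js too,
    // so send a push promise before the client asks for it.
    stream.pushStream({ ":path": "/app.js" }, (err, pushStream) => {
      if (err) return; // e.g. the client disabled push
      pushStream.respond({ ":status": 200, "content-type": "application/javascript" });
      pushStream.end("console.log('pushed before you asked');");
    });
    stream.respond({ ":status": 200, "content-type": "text/html" });
    stream.end("<script src='/app.js'></script>");
  }
});

server.listen(8443);
```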
So can I use it now? This is the browser support, and it's fairly good. The support in Internet Explorer is Windows 10 only, and Safari has it in later versions of OS X, but it's still all green. The only drawback is that the browser vendors have agreed to only implement HTTP/2 over TLS, so you need a certificate and a secure connection. Some vendors think this is a bad thing, but that's how it is for now. Server support: if you're going to deploy it on your own servers, there's fairly good support. Apache has it, Jetty has it, Microsoft has it on Windows 10 and Server 2016, and nginx has it. There are also a lot of clients implemented for specific programming languages like Go, Ruby, Python, Node, et cetera. For proxies, BIG-IP supports it and HAProxy does not, so that depends on what you're using. And for the cloud: Akamai supports it, Google App Engine supports it, and Cloudflare does. Azure, Amazon Web Services and EdgeCast, for instance, don't, and they don't have a roadmap for it; it probably digs too deep into their infrastructure. But if you push them, send them emails and so on, it's going to come in the future. So: faster, easier, more robust. It's faster: less overhead, fewer TCP connections, and you have server push, which can really speed things up. It's easier, because you don't have to think about all the workarounds we have in HTTP/1, like concatenation, image sprites, domain sharding and inlining. And it's more robust: you have flow control and weights, which give you a lot of control. Yeah, thank you. Hello. Hello. Hello, everyone. Welcome to my talk, How I Hacked My Way to NDC. This story begins in January, when I attended Security Day 2016, which had many great speakers and many great talks, speakers such as Niall over there. At the end of the day, they held a Q&A session where we in the audience could ask the presenters questions. Before the conference, Troy had asked on his Twitter: does anyone know of a good mobile app or other service to allow people to submit questions during a conference talk? A couple of people replied suggesting Slido, and the people behind Slido also replied and said, well, thanks for the shout-out, and please let us know if we can help you with anything. So this is Slido, how it looks on the big screen, in the presenter's view, and this is Slido on your mobile phone. We in the audience were sitting there with our mobile phones so we could ask questions to the speakers of the conference. So I thought to myself: well, this is a security conference, so it would be really funny to see if I can exploit this somehow, maybe exploit the presenter's machine. And the aftermath of that is basically this. I managed to find a cross-site scripting vulnerability by entering a piece of code into the "type your question" field. It was executed, and a dialog box popped up on the speaker's computer. What happened then was that the audience started laughing. Then the presenters turned around, and then they started laughing. And Niall there, he asked, well, who did that? So I raised my hand, and then they said, well, give that man an applause, and come up here and tell everyone what you just did. And then he encouraged me to give this lightning talk afterwards. So now I'm going to tell you what I did. As you know, with cross-site scripting vulnerabilities, what the attacker tries to do is run malicious code in the victim's browser.
So this here is just the simplest, most basic thing you can use to test for a cross-site scripting vulnerability. I started out with a simple script tag with an alert call from JavaScript, and entered that into the question box. And what happened then? Well, a blank question turned up in the presenter's view. No dialog box, no text, just an empty question. Because of this, I thought to myself that there might be a filter here trying to prevent cross-site scripting. But the nice thing about those filters is that the OWASP people have a cheat sheet for getting around them. In my experience, when I do security testing, somewhere around number 17 on this list you find something that works. This is a screenshot of the whole table of contents, so you can see it's quite long: there are over 90 methods to get around these filters, so there's something for every taste. Some examples. The one I used is at the top, which is basically just an iframe where the source attribute is set to a piece of JavaScript code; you have to have "javascript:" in front of it, because this is a source attribute. Or the one at the bottom there, which has become my personal favorite (I type this in everywhere), which is basically just an image tag with the source attribute set to the root. This will generate an error, and then you can have your JavaScript in the onerror handler. Now, for those of you who know Slido, you know that they have a 160 character limit on the questions. How much mischief can you do with 160 characters? You cannot type that much code into 160 characters, right? But we can get around that, because what we can do is load an external script. We can do this in about 142 characters; you could probably get that down even further if you really put effort into it, but this is a proof of concept that was working on Slido. What the JavaScript here does is create a new script tag, set its source attribute to a JavaScript file on evilhacker.com, and then append that script element to the DOM. This will execute an external script, and now the sky is the limit, basically: you don't have the 142 or 160 character limit anymore, you can load whatever you want. So, what happened after this? You remember that Slido had replied to Troy on Twitter. He now replied back and said, well, so we used the app, and now we have a serious security vulnerability to report. Luckily, Slido were really quick about this and replied with their email address. By now, of course, this Twitter conversation had become quite hilarious, and also quite embarrassing for Slido. But they replied with their email address, and I emailed them all the details. A good end to the story is that they fixed this quite quickly; it was fixed by the next day. Peter Kormick from Slido also told me that this cross-site scripting vulnerability existed even though they do escape all the output of data from users; because of some customization they had done for a customer, this particular input was not escaped on output. So it was basically an honest mistake, and we still think that Slido takes security seriously.
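The application-side fix is to encode user input on output. A minimal sketch in TypeScript; real applications should lean on their framework's encoder rather than hand-rolling one like this:

```ts
// Encode the characters that let user input break out into markup.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

const question = `<img src=x onerror="alert('xss')">`;
const element = document.getElementById("question")!;

// Escaped, the payload renders as harmless text instead of executing.
element.innerHTML = escapeHtml(question);

// Safer still: skip innerHTML entirely and let the browser treat it as text.
element.textContent = question;
```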
But this brings me over to my last slide here, which is about Content Security Policy, because this could have saved Slido in this case: it can protect you from some honest mistakes. Basically, Content Security Policy is an HTTP response header with which your web server can tell web browsers where they are allowed to load dynamic resources from. In the example you see in the middle of the slide, the content security policy tells the browser that it is only allowed to load scripts from the current origin or from cdn.example.com. So in the previous example, where we tried to load a script from evilhacker.com, the content security policy would have ensured that the browser, or at least any sane and modern browser (any old IE users would be out of luck), refused to load that script. This is not a substitute for other security measures, but it can save your ass if you are unlucky and have made some honest mistake. So, thank you for listening. Thank you. Okay. Hi. I'm going to talk about React. My name is Jan Tore Stølsvik and I work at Sopra Steria as a full stack developer. I work with JavaScript, and I especially like to work with React. React is a JavaScript library for building user interfaces. It was created at Facebook and later open sourced. Since it focuses only on the user interface, we call it a library and not a framework, but you can use it together with most frameworks that exist. I don't think React is just another library, like we usually say about the latest hype. It does not look that different on the surface, but the way I think and program with React has fundamentally changed the way I create frontend applications. It is more enjoyable to work with. It's also faster, simpler, more reusable and more testable. After this talk, I hope you are curious about React and want to learn more, and maybe try to use it in your next project or application. There are a lot of frameworks out there, and React can be used by itself or plugged into one of them. But most frameworks have a very broad scope, and they have some part that does the same thing React does; many of these parts try to solve the same things React does, for example making your programming less imperative. So they can do the same things React can, and React can do the things most of the other frameworks can. Therefore, I'll focus on features that are unique to React and make it great to work with. This is the architecture that most frameworks use. It's called Model-View-ViewModel. It focuses on enhancing the internal representation of the web page, which is called the DOM. So the view is the same thing as the DOM, basically. But the DOM is slow and it has some issues; it's also quite bad to work with, so we try to avoid it if possible. The way we create views in these frameworks is through HTML templates: you create a bunch of HTML elements, add some attributes to them, and that binds them up to a variable in the view model. If you change the view, that automatically changes the variable in the view model, and that updates back down to the view. So you can change the data anywhere you want, and the framework's job is to keep everything in sync. Now, at Facebook, as the code base grew, they found it increasingly hard to maintain the code with this structure.
Previously fixed bugs kept showing up again and again. The example they give when talking about React is the chat box. On Facebook, you have a chat box in the lower right corner, and you also have a notification up at the top. When you receive a message, you should get a notification in both places. If you click down at the bottom and read the message, it should be updated in the header; if you click the header, it should be updated at the bottom; and if you read the message on your phone, it should be updated in both places. But bugs that broke this functionality kept showing up all the time. And finding these bugs, which exist because state changed in a specific order, or because of timing issues, is a tedious and time-consuming process. So they decided to do something about it. And this is the architecture of the React library. Here, we break all ties with the DOM instead of trying to enhance it, and we abstract it away with this virtual DOM. A nice thing about breaking all ties with it is that you can use React for other things as well, for example mobile development with React Native. And since the DOM is very slow and bad to work with, Facebook took some liberties and made the virtual DOM faster, fixing some of the issues we experienced. But the best part by far is this magic that happens: we don't have to think about state changing over time anymore. Like in the old days, when you would essentially refresh the whole page every time something changed, almost like a server-rendered page, we now only need to define our view for any state at any point in time, and then React will fix the rest for us. React does this by re-rendering our application every time something changes. React then takes the new version, compares it with the old version, finds out what changed, and applies only that to the DOM. Since the DOM is very slow, we get a huge performance boost by batching these changes together and only reading from and writing to the DOM when absolutely necessary. Our job, then, is to create this model, and that is done using React components. All the design decisions about how you create these React components are amazing. We can basically think of a component as a function that depends only on the state of the component and its properties, and returns a virtual DOM representation. Everything you write is JavaScript; there's no HTML. You encapsulate all the state inside the component, and the data only flows one way. So let's look at a simple component with only an input field and a label. We describe the state in this getInitialState method, and then we use the state in our render function, which returns the view for the value in this label, or span element. Commonly we would have a two-way data flow and we would be done, because if we changed something in the input box up here, that would automatically update the model, which would update the label. But in React, the view always depends on the state and never the other way around. So the label won't get updated; but the thing I found mind-blowing is that you can't actually change the input box either. If I try to write something here, nothing happens. So this is not good. Let's see. To change the value of our input box, we need to call setState with our new state values. This is the only way you are allowed to change the state inside the component, and you are only allowed to do it from inside the component, which means that it is encapsulated inside the component.
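In code, the controlled input the talk describes looks roughly like this. A class-component sketch in TypeScript; the talk used the older createClass/getInitialState API, but the idea is the same, and all the names here are illustrative:

```tsx
import React from "react";

interface State {
  value: string;
}

class NameInput extends React.Component<{}, State> {
  state: State = { value: "" };

  // The only legal way to change state: setState, from inside the component.
  handleChange = (e: React.ChangeEvent<HTMLInputElement>) => {
    this.setState({ value: e.target.value });
  };

  render() {
    // The view always depends on the state, never the other way around:
    // without the onChange handler calling setState, typing does nothing.
    return (
      <div>
        <input value={this.state.value} onChange={this.handleChange} />
        <span>{this.state.value}</span>
      </div>
    );
  }
}
```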
Therefore, if there is something wrong with the state of your application, you know exactly where to look for the bug: you just open the component and search for this.setState. You never need to worry about execution order or timing issues either, and this reduces the complexity of our user interface considerably. So to fix our simple component, we explicitly set the state when the input box's onChange callback is invoked. We set that up here, and then this function gets called. Now, some might think this is annoying (or we can test it first, I guess). Some might think this is annoying: you have to do this for all the value changes in your application, and that becomes a lot. And look at all that code for such a simple component. But I find it very liberating. You know exactly what is going to happen, all the time. And especially if you look at old code or other people's code, it's easier to read and to find bugs. You also have control of the execution order. And when a component grows very large, you are encouraged to create smaller subcomponents. So with React it is encouraged, and quite easy, to create reusable components. Everything is JavaScript, and all the components are functions, so it feels very natural for most developers to extract these components: you extract functions in your code all the time, hopefully. It is also normal to extract the render function into its own stateless function, because then the view doesn't depend on any state at all; it only receives properties from its parent component and returns a view. This makes it extremely easy to test, because your views are now pure functions: you only send in some properties and you get out a view representation, and that you can test in whatever manner you want. To summarize: React is awesome, at least for me. It forces you to write good code, and your app will most likely be faster simply by using React, without any kind of optimization at all. It is simpler, because the state changes are split up into distinct phases and encapsulated inside the components. It is easy to extract components into meaningful, reusable subcomponents, because everything is functions and you're used to doing that all the time. And functions are quite easy to test, which makes our application quite testable. I hope you found this interesting and want to find out more. Thank you for listening.
Einar Afiouni: Organising CSS, what can you do, what we did and what we learned from it. We recently had the opportunity to start fresh with CSS on a project, so we decided to take the opportunity to research what methodologies there are out there for CSS, what their advantages and disadvantages are, and what we decided to go with, as well as what we learned doing so. (We just started, but by NDC we will have had enough time to see how it went: mistakes, regrets, etc.) Nicholas Paulik: How I hacked my way to NDC. In January, I attended the conference Security Day 2016 in Oslo, with a plethora of awesome speakers and sessions. During the live Q&A session at the end of the day, I spontaneously tested the Q&A software for cross-site scripting vulnerabilities, causing an unexpected dialog box to pop up on the big screen. Being a security conference, this was met with laughter and applause, and I was encouraged to give a lightning talk. This lightning talk is based on that event, with focus on what made the exploit possible, how some common security mechanisms can be defeated, protection against cross-site scripting, and how the vendor of the Q&A software responded. Erling A Børresen: HTTP/2 - What's the deal? HTTP/2 was published as an RFC in May 2015. In this lightning talk I will take a (very) brief look at the history behind HTTP/2, what it is, and how this new standard will change and improve the everyday work of a web developer. The presentation is best suited for web developers who want a quick summary of what HTTP/2 may bring to the table, without diving too much into the details of how it works under the hood. Jan Tore Stølsvik: Forget data changing over time with React.js. React is a JavaScript library for creating user interfaces by Facebook and Instagram. React brings many good things to the table, but the best part by far is the virtual DOM. With a virtual DOM it has once again become possible to code frontend applications like the good old days, when for each request we simply re-rendered the entire view. In this talk I will explain how this simplifies an application and makes it faster, more reusable and more testable.
10.5446/51830 (DOI)
All right. I think we're going to get started. We do have one more panel member coming, but he should be here very shortly, so he may just arrive a bit after the beginning. So welcome everybody, good morning, and thank you for coming today. I hope everyone had fun at the party last night, but maybe not too much fun. My name is Eric Brandis and I am a co-founder of a JavaScript error monitoring company called TrackJS. So today, rather than talking about a specific tool or technology or framework, like a lot of the sessions here, what I want to talk about is the business of software. That is, taking all of those technical skills that you all have and turning them into a product or a service that you can sell. Entrepreneurship right now is a really hot topic. The news media, the financial press, everybody is talking about startups, startup accelerators, Y Combinator; they're talking about unicorns, venture capital funding, things like that. And while those are all great topics, what I want to do today is dig in a little and hear from people who have actually started software businesses, and learn from their experience. To that end, we've got four exceptional people, well, three exceptional people and Todd, who had to come because he's my co-founder, to talk to us and share what they know about starting a software business. The last thing before we get started: I was told that Norwegians are not always keen to participate live during presentations, but I'd love to get your questions; I'd love to know what you want to know. So, if you look up at the slide: I discovered that panelquestions.com was actually still available, so I bought it. If you go there, there is a glorified text box with a submit button where you too can submit questions, and I'll be checking it throughout this discussion. If you have anything you want to ask, just put it in there and hopefully I can work it into the discussion. And if you want to have fun, see if it has any SQL injection. I thought about that. I did think about that. I was going to add a disclaimer that says: please don't go full Troy Hunt on the website until maybe after this, 60 minutes from now, and then you can hack it and do whatever you want to it. Although I did check for SQL injection, so I think we're safe. Famous last words. Okay. So without further ado, let's get started. As a question zero for all the panel members: obviously we've got your pictures up here big, but maybe each of you could introduce yourself and give a little bit of background. What is your business? Maybe there's more than one business. Where did you get the idea, and what was the founding story? Let's start at the end, and let's go with you, Ben. Cool. So my name is Ben Hall. I'm the founder of Katacoda. We're an interactive learning platform for software developers. We want to teach Docker, Kubernetes and lots of the cool infrastructure topics in the browser, without you having to download or configure anything: you can just get started and start learning and playing and experimenting. I started this because infrastructure is hard, and there are not enough good ways of learning how to do it properly.
It's great having a virtual machine on your local laptop, but how do you replicate network problems? How do you do it properly across a cluster of nodes, see how your applications respond, and learn how to code for the various weird and wonderful edge cases? That's what we're trying to teach in the browser, and that's why I'm passionate about what we're doing. Awesome. Rob? I'm Rob Connery. I founded a site, Tekpub.com, and it was acquired by Pluralsight, a video training company. And I'm now actually starting a new one, believe it or not. So yeah, Tekpub was all about learning through videos; in fact, we did one with Ayende, did several with Ayende. At the time it was founded, which was in 2009, there wasn't a solution like that for .NET developers. I followed in the footsteps of a company called PeepCode, and I learned a lot from his mistakes. And that's pretty much what we did. Awesome. So my name is Oren Eini; I'm also known as Ayende. The company is called Hibernating Rhinos. The theme of what we do is that we deal with data. One of the things we do is a tool that sits between an ORM and your application; it can analyze the traffic between them and tell you when you're being stupid: you're calling the database too often, you're making a query without an index, all sorts of stuff like that. After doing that for a while, we realized that the problem was with the actual database being used, so we built RavenDB, which is a document database, NoSQL, and that's what we primarily do now. Awesome. I'm Todd Gardner. I'm one of the other co-founders of TrackJS. We're a JavaScript error monitoring tool: we help you understand when your end customers run into problems on web applications. It's a bootstrapped business; we've been going for about three years, built on nights and weekends while working other jobs along the way, with very, very long hours and terrible mistakes. Way to get them excited for what's about to come. That's good. I don't know about you, but seeing a clown slip on a banana is very funny. No, these guys made the mistakes. I don't know about that. So, okay, a lot of us in this room are developers, right? And we have kind of one responsibility: it's technology, it's computers, we understand that. But when you make the choice to go from developer to business founder, you take on all sorts of responsibilities that you didn't have before. So what I'd like to know is: how did you make that transition? What was unexpected about it, and what was challenging about it? Maybe, Rob, we'll start with you. I think for me the biggest part of the transition was becoming a life coach to myself and believing that I could do it, and it was really difficult. That was the hardest thing. I decided to do it, and I remember putting something up; I think I showed it to Oren and he said, this is crap. It just looked bad; everything was bad about it. And I just thought, what am I even trying to do here? This is ridiculous. But I kept on, and I just kept doing it and kept doing it. And then we opened up and made money, and I couldn't believe it happened. Still, to this day, I have to look back on it and tell myself that actually it was okay. So I think that was probably the biggest transition for me. Okay, awesome. Ben? I can definitely relate to that. Every week I have to question: shall I just go contracting, shall I just get a proper job?
And then I look at the job market and go, no, that's just ridiculous. I don't want to do that. But then it's also just self-confidence and also the motivation. It can be very hard to, if things aren't going well that week, it's very hard to keep pushing and keep driving and keep adding what people are asking for when people are going, this didn't work, or it's completely not making any money at the moment, or the metrics are going down the pan. Those days can be problematic. And then it's also just not being a developer, which is sometimes quite difficult. Not just wanting to write code, not wanting to add a cool new feature just because it's cool. So what percent of your time is writing code? I probably do 30 to 40%. I probably should be doing 25, I would say. I'm still writing too much code, but we're early, so I can get away with it. I'm also a developer. And then the rest is marketing, sales and all of the other associated goodness that you have with running a company. Awesome. Todd? So I find that development versus all of the other stuff is very different workflows. So when we first started, it's fun. You get to just kind of nerd out and you're trying to solve a problem and you're hacking with probably, if you have co-founders, they'll probably end up being like some of your closest friends and you're just like, you're building something amazing and you just want to show it off. And then at some point you get to a point where you have to turn it into something real. You need to start doing, you have to form a legal entity and you have to do accounting and you have to figure out how do I market. And those are very non-deterministic things. It's a very different flow. When you're programming, you get to pick something and you might just churn on it all day. But when you're doing the other part, it's like, I need to spend 10 minutes responding to this email and then I need to respond to that sales call. And then I need to figure out what do I want to do with my Google ads today? And I've got to get the books done for last week. It's just a very different mindset and it's very hard to balance doing the two at the same time. That's one of the, for me, the other thing was that I started and I wrote a lot of code. And then there was, okay, now we have to make this product. So that means that we need the website, that we need a guide, we need a sales strategy, we need a whole bunch of stuff. At some point, I found myself working really hard. I look at code on other people's screens as I pass by and say, there is a mistake on line three, or stuff like that. And I'm on my way to a meeting about, okay, what should be the location of our new office, or which conferences should we go to this year, or what should be the t-shirt that we go with, stuff like that. And they're all important stuff, the stuff that will make the product, will make the company in the end. But also not the fun stuff. This is not let's solve a really complex problem. We solved the complex problem and now someone else is solving the complex problem while I'm dealing with IRS audits. That's good fun. Kind of riffing off of that. One of the biggest fallacies that we totally bought into when we were starting was that if you build something great, they will come. All you have to do is build an awesome piece of tech and people will discover it and you will be crazy successful and it's totally not true. You need to spend at least as much time talking about your awesome product as you spent building it, probably more.
As developers, that's incredibly hard. Chad Fowler, he's big in the Ruby community and he's a very motivational speaker, super famous guy. He has a great line. He says, marketing is a moral imperative. And I just thought that's the strangest thing I ever heard. The idea was that if you have something that you believe in that you think is so awesome, you have to tell other people, otherwise you're keeping it from them, which for me, absolutely I can't do. I am the worst marketer on the planet. I just can't self-promote or do anything like that. I just get so shocked and shy with it, unlike Oren. I don't like talking about it. Honestly, give me a computer, give me some corner, let me have it. That's fun. That's actually energizing, most of the time. And I find myself dealing with, let's talk to 300 people a day at the booth, or let's do... Except when it's these people. These people are awesome. Yes, but not 300 times a day. Give me every one of them one at a time. Awesome. 300 times a day. Thank you. I need a break. Literally, I finish the day here and then I go in. And typically by eight o'clock I'm done. And that's not normal. It hasn't been normal since I was six years old. I'm just so tired from talking to people and the interaction, and every single person that you talk to, you have to put yourself in their shoes and feel the same excitement. Let me show you why what we're doing is awesome. And at the same time, you have to... this is the sixth time you have done this since the last break. And you have to do it without feeling... without making the guy you're talking with feel like he's talking to a call center. Stuff like that. And that's probably one of the most exhausting parts of the job. I also think it's beneficial because we aren't natural salespeople. We don't come across as car salesmen. At least that's what I... When I'm walking the expo and talking to developer evangelists, they're not selling. They're just having a normal conversation. And I think we can play that in our favor and not go down the traditional enterprise salesy route and actually start just engaging with developers and being more evangelists. So do you have any tricks? Do you have any tricks that you use to keep yourself out there and in front of people? What kind of strategies do you use? No, that's why I'm listening to these guys. I haven't got that far yet. My strategy was to get a good co-founder who could do that kind of stuff. That was mine too, actually. There you go. So I knew you had Todd. Wait, who is it? I was going to say who is it? Well, so, you know, I had a strong dose of liquid courage. I would usually take a couple shots of bourbon and then I would walk up to people I'd never met, like Rob at like an NDC London, like maybe two conferences ago. I'd never met him face to face, but I had a couple of drinks and I walked up to him and I gave him the TrackJS demo with my business cards in my hand. Like, here's this thing that I made. I think it's awesome. Here's the thing that, and you just got to keep doing it. You just got to throw yourself into it like over and over again. Is that when I caught you when you passed out? Like I had to catch you? Yeah, yeah. Like it was just, it was too much for me. You got that a lot done. It was the bourbon thing. Anyway, it's awesome. Well, so, okay, so you've decided that you want to become a business founder, right? And all the responsibility and late nights and not fun things that go along with that, apparently. So how do you fund it?
So like, do you work at the same time? Do you just quit your job and hope for the best, or what was your plan? Todd, do you want to start? I did not have a trust fund or a savings account or anything like that. I worked. I was an enterprise consultant. It sounds, at least how Ben sounded, like that market's kind of tough. It's not so tough where I live. There's a lot of very big companies in Minneapolis that like setting large amounts of money on fire. And so we take advantage of that to some extent. And so there's plenty of work as a contractor there. And we kept doing it. We worked 40 hours a week. In a way, it kind of fed into it. I was working as a JavaScript consultant and building error monitoring tools for every project that I would build over and over and over again. And we decided to do it on our own. But we couldn't stop working at the time because I have two kids and a mortgage and expenses and bills and things that just need to be paid. So we just decided that we were going to work 60-hour weeks. So Tuesdays and Thursday nights, I left my regular work, grabbed a bite to eat, and then went to TrackJS work. And then I worked another four hours on those nights. And that's how we got through the first year, year and a half, just pushing through as much as we could while still trying to make money. So I quit my job in February of 2008 and became an independent consultant. If you remember, 2008 was a great time for the economy. That was also the same time that I did the following thing. Quit my job, got a mortgage, and had to pretty much make up a business from scratch. And for the past three years or so, my main income was almost, in the beginning exclusively, from consulting. Now, I think you have a lot of local people that you can work with. My approach was that I would find a client, typically somewhere far, far away, and I would go there and spend a few weeks working and then come back home and then go back there again, stuff like that. Primarily because there is an inverse, there is a direct correlation between how far away you are and how much you can charge. I don't know why. I mean, what would you expect? Maybe we could like, we're about as far away as we can get. Maybe you find local clients and I'll find local clients and we'll refer them to each other and double our rates. I don't know why, but the out of town expert can charge more just for being out of town. And at some point I realized that this is not a sustainable way to live. I'd been on the road like 80% of the time. And I started saving pretty much everything that I had and working. I would go to the customer and then work for eight hours and then go back to the hotel and work for another six hours, every day for close to two years, saving everything that I could until the point where I had enough money to pay the salary of a first employee, a first employee, for like three or four months in cash. As in I don't rely on a future cash flow to account for someone's salary, especially not at that stage. I'm sure your employee appreciated that as well. Yeah, it took me a year to get there, but that's much better than having to go to someone, you know, your salary, yeah, it might not happen this month, this year, whatever. And that cushion helped a lot at some points while it took some time to build the customer base and things like that. Absolutely. Rob?
Yeah, this is where the life coach thing, I don't know if I realized I used that term and I'm not sure if you guys know what I mean, but it's just the people that you hire that tell you that you're awesome and you can do it and all that stuff. This is where it had to kick in for me because I remember I left Microsoft, and when I joined Microsoft they give you a chunk of stock, you know, as a thank you for joining or whatever. So you're supposed to not touch that, that's the idea. And I remember the day that I looked at that and I said I need to live on that, and am I going to do this? And it was going to buy me three to four months of time, otherwise I'd need to get a job. So I cashed that in. The whole thing? What's that? The whole thing. Yeah, well, so I cashed that in and I remember thinking, and this was incredibly important for me. It's all or nothing. You can't go halfway because that would be the worst possible thing, because you'd lose the money and then you'd end up with crap. And so, yeah, I cashed in my stocks and then I built almost everything out by month four and then didn't quite get there, so I had to start taking on contract work, which is okay, but then I did what Todd was doing. And I would sit down with my calendar and then literally block out time so I could visually see I'm doing my contract work up to this point and then from here to here I'm doing this and then playing with my family the rest of the time. And it was rigor, and finally after six months we got to a point and then launched. So what did your family think during all of this risk and stuff? Was your wife okay with that? Very supportive. Very supportive. Yeah, she didn't like the stress, and it's hard to deal with the stress, obviously. But you get to a point and it's very motivating. Once you can see the light at the end of the tunnel maybe? Yeah. Well, it's funny because I'm doing it now again, because I was doing videos for Pluralsight for a while and they went into enterprise and so I don't do that. So I'm doing my own thing again and I'm having to do the exact same deal. And so I literally had enough saved to get me by for six months this time, which is good. And I'm staring it down. In fact, I was just sitting at my hotel and looking at my calendar like, oh god, I have two more months. Two more months. And it's all or nothing. That's red4.io that you're talking about. It's pretty cool. I mean, I went out to it and looked at it. Oh, thanks. It looks pretty interesting. Thank you. Ben? So I left a previous startup which I co-founded, and I left without any savings or money or anything left in the bank account. So I had to go contracting. So I took on every interesting client I could find. I've still been picky, but if it's interesting work I take it. I do a lot of training courses on the side. I do a lot of extra work in the evenings to counteract that and try and double how much I could earn by doubling my hours, which isn't the most sensible way. It's the easiest when you get started. And then that helped me to build a good client base and then I continued doing that for a year. And at that point, when I started I didn't have an idea of what I wanted to do. I just knew I had to do contracting. And that's when I had to start learning new technologies, because I had taken a step back from coding to do the other startup. Going back to contracting I had to relearn things like Docker, all the new ways of doing React and all the new web frameworks which I hadn't touched in a while.
And so that's where I thought, well, there needs to be a better way to learn these things. Why don't I just build it? And that's where Katacoda started to be born. We went through about three different versions of it before it became something which actually made sense and was the same product and was like a coherent view, like what you were saying, like you show people and they go, no, that's ridiculous. So you go through those iterations a few times and then you eventually come up with something like, you know what, it feels kind of not bad. And I'm happy with not bad. That's a good starting place. And then, yeah, so now Katacoda is my main focus, but I'm still doing training on the side and still doing work with clients because I have to keep that runway going. Like there is a deadline. It's a great motivator to keep doing additional work, and then conferences like this also help build new relationships and hopefully find new clients and more work, and that keeps money in the bank which keeps me doing what I love doing. It's a virtuous cycle. Okay, so we've all started software businesses here, right? Which means that there's a really heavy technology component to that business. And as software developers, we've probably all made coding mistakes in the past. What was the biggest technical mistake that you made, or any kind of technical debt that you incurred while building the business? Rob? Oh, wow. It's simultaneously the biggest mistake and the best move I made. Isn't that weird? I used Rails. It doesn't scale. It's not web scale. No, it was interesting. I used Rails and I got the business up fast. I did. I got it moving fast and that helped. The mistake I made was staying on Rails, and I'm not here to hate on Rails. I swear, even though it sounds like I guess I am hating on Rails. What I ended up with when I sold the business was a pile of crap in the database that just was not good. And I had kept thinking, I will make amends here. I will pay attention to this and go and fix this data. And thank God I had all the logs and the payments and everything. It's weird. You know, looking at it, I can't put my finger on exactly what went wrong. But when Pluralsight came knocking, we went through due diligence and all that stuff, they asked for about 50 different reports to show all kinds of customer movement and churn and retention and whatnot, as well as financial stuff. And whew, I just did not have that data because I wasn't thinking long-term reporting. And that's the sad part because that's where I come from. That's my background, I'm a database and analytics person. So I should have sat down and built a really nice database and gone from there. But I didn't. So that's my problem. Do you think if you would have built that really nice database up front, would you have wasted too much time? Yeah. Or would you have? For sure. There is the opportunity cost, and there is also, in the end, it didn't stop you from selling the business. It might have made the... Yeah. I mean, it's... I would have made more money. That's the problem. But you're right. I probably wouldn't have sold it at all if there was no business. Yeah. It's a trade-off. That's right. For me, it wasn't... I don't think I've hit anything so far. I probably haven't. I'm just not aware of it yet. But in my previous startup, I was doing something which actually turns out to be very similar to TrackJS, which is how I met Todd in the first place.
And the biggest mistake was spending way, way too long building the database and trying to make a scalable solution out of the box. And we just spent... Well, I spent like three or four months trying to figure out how do you get Cassandra to work at scale? How do you do HBase? How do you do Hadoop jobs? Because people will need this. And I was like, four months in, had it working, and no one wanted it at all. And it was just like, wow, okay, that was a waste of time. And I hit the big delete button and started over. Just because I was focusing on the wrong problem. I was focusing on being a developer and trying to build it scalable and not actually building the business. And that was a fundamental mistake. Which is, I suspect if you'd focused early on on building that database, maybe it would never have got off the ground in the first place, because it would have become some big, overweight, heavy thing which you would have had to maintain and you would have... Yeah. And by the way, I admire the fact that you're not telling us that we should have used Raven. I know it's just sitting right there. It's killing me. Well done. Well done. So, Todd? So, I think there's this... Well, we kind of did an endless cycle of failure. But it was kind of intentional. So the first version of TrackJS was an ASP.NET MVC 3 app talking to a SQL Server that were both on the same machine that happened to be under his desk, just in his house. That was version one. And we launched with that. And that worked great for a few test accounts. And then when we signed our first real customer, everything melted, of course. Just nothing could do anything. Once you get a real amount of data, it couldn't handle it. And so, arguably, it was a mistake to build it like that. But it got it done. It got us those first customers. And then we went back and we had a couple of very painful long nights where we rebuilt it and we actually moved it into Microsoft Azure. And that was good. That helped us grow to the next phase. And then we got to a point where Azure didn't work for us anymore. We couldn't push through the performance we needed. It just didn't do what we needed to do anymore. And so then, arguably, that was a mistake. But it did get us to that next phase. We had to have a couple more very painful conversations where we ripped all of the stuff that we had, quick and dirty, just called Azure directly from our core pieces. We had to painfully rip all of that out and relearn how to do it with dedicated hardware. And so, arguably, that was a mistake. But it got us there. And so, I can't say that any of those were a hard and fast mistake, because it was like the cheapest, easiest, quickest way to solve the problem that we had and get us to the next phase of growth. But it did create work downstream. But how much, how expensive was it compared to having a solution at hand for that timeframe? See, I feel like it was worth it. In retrospect, I feel like it sucked at the time. But I feel like it was worth it, because if we had tried to build the solution we have today at the beginning... You don't have the information. No, we couldn't. We wouldn't have known the problems we were actually going to have. And it would have taken us way too long. We would have missed market windows and people would have been able to jump up and compete with us. And it would have been really, really hard. So one of the things, the problems I had, nobody cares about how clean the architecture of your startup is.
Like, there's a quote, I forget, maybe it's Peter Thiel or something like that. If you're not embarrassed by your launch, you waited too long. You should just release this embarrassing pile of crap. And like, does anybody want it? Somebody pay me, please, pay me, give me some money. See, that's interesting because I'm actually doing the exact opposite. Okay. It's an experiment, because everyone talks about MVP, right? Minimum viable product, you get out there and then, you know, improve it as you go. That actually cost a ton of money, because I had to keep rebuilding Tekpub. I'd rebuilt it three times, each time with Rails, and I stopped producing content and that cost money. And so I thought, well, this time I'm going to take extra time and make it really, really looking slick. I want it to come out looking great and have that, have the dual purpose of marketing the site and then also I feel good about it. Are you talking about look and feel, or are you talking about technical behavior? Both. Because I will agree with you on the look and feel, that's important. I guarantee that in a year you will have deep technical issues with the website regardless of how you design it right now. Yeah, you know what? Did you build it on Rails? No. Part of it's just like second system, right? Like you've done this once before. Like you know a lot about this, kind of like if I go on from TrackJS and I build another tool in the monitoring space, I know a lot about it. I would build the next one a lot different than we built this one. And so you're doing software as a service, or... I'm not. So I guess that's a luxury I have, that I could do that anyway. So, Oren, what's yours? Is it Esent versus Voron? No. It was the most effective at the time. By far the most expensive technical mistake we made was Silverlight. I remember that discussion. Yeah. And now he mentions it. In 2011 we made a decision that we needed a really good studio for RavenDB. At the time using Silverlight made sense. It meant that we could use a lot of the same skills that we already had. And we decided to let's go ahead and build a studio in Silverlight. Fast forward a couple of years. Microsoft continuously mishandled that. And eventually, by the time 2013-2014 came around, we realized that we'd spent two plus years of development time in the studio. And now we have to rewrite the whole thing as an HTML5 app. And that effectively took three developers for six to eight months. Just doing that. Which effectively has... It's funny because the move to HTML5 was such a big thing for our users who hated Silverlight that this was one of our selling points for the next major version. But on the other hand, saying we used to be so crappy that now we are at a baseline level, upgrade, is not something that I really want to have to say. So that cost tens of thousands and a lot of dev time that could have been spent doing a whole bunch of other stuff. Absolutely. So we've got a couple of good questions from the audience actually. So thank you. Some of the questions are around a little bit more about funding, in terms of what are your thoughts on investors or investment. I mean, even if you bootstrapped to start, did you guys ever take any outside investment? Or if you didn't, why not? I'd love to tell a story about that. So we were incredibly naive about how business worked when we started. And so we had dreams of Silicon Valley billionaires just dropping a big sack of money. Here you go. You have a great idea, great team, go do this.
That's not really how venture capital works in the rest of the world. But so we had this great idea that we were going to do JavaScript error monitoring and this was going to solve this major problem that all of these big companies had. And we put together a pitch deck and we went around to a bunch of different investors. And we gave a pitch. And we kind of got the same response from all of them. And it was like, yeah, this is really cool. And I think you're going to make some money, but I'm not sure. And maybe come back after you hit 10K MRR and you have some customers. 10K MRR is kind of like a magic number in software as a service, where you're bringing in $10,000 of monthly recurring revenue. It's kind of a number that means that there is a legitimate market behind you somewhere. And so they're like, you guys got to go hit that and then maybe come back and talk to us. And that was very depressing when we got that rejection over and over again with that same message. And it almost sank us, frankly. But we decided to keep pushing because we'd already invested a lot into it. And in fact, we actually were able to hit that number without too many more months of effort on our own. And then those same investors started coming back to us and they're like, hey, TrackJS, let's do some things. And we were kind of like, why do I need you now? We hit this number. We have money coming in and we all have jobs. And now this money is just coming in and we're growing 10% every month. And why do I need you? Why would I work for you now, for you to be my boss and sit on my board and all of those other things that go with investment? When we could just own this thing, we're growing fast. We don't need to be a billion-dollar business. We're fine with being a couple million-dollar business. You don't need to be a unicorn. You don't need to be a unicorn. There's a way better chance that you can build a sustainable lifestyle business that can support you and a handful of other people than that you're going to totally disrupt the market and destroy everything, when really you're probably just going to go down in flames a couple of years later. And I might be a little disgruntled over the whole process. Wow. Lifestyle businesses. So, Todd likes VCs is what we got from that. That's good. How about you guys? There go the investors knocking on the door. Hi guys. How about you guys? Oren, Rob, Ben? Yeah, so I explicitly started my business so I would have a place to work and have a place to do something that I loved. Apparently, that's not actually how it works in a business. So, you have to do a lot of other stuff that you don't love. So that was a miscalculation, but the whole idea was that I want to retire from this place. So I kept thinking, okay, I'm building a business for the next 40, 50 years. And everything that we did was, okay, I don't want to have to meet with investors and have other people butting in and making demands and all of that, especially because they tend to have very different priorities, very different schedules. It means that I don't have to... you talk about 10% growth. For any regular business that's awesome; for a VC, that's horrible. You should be at least at 60% growth, and that's the minimum. So you should do more stuff to get to that 60% growth until you get the billion-dollar business and then they just sell you. So there's a lot of that.
The issue of control, the issue of sustainability, the issue of I don't want to have to go around and have fights all the time about the direction where the company is going and what do you want to do and how much hair do I want to have at the end of the whole thing? That's the problem with VCs. They're going to want their money back at some point. No, they want 10x their money back. That's the problem. Right. Even if that means forcing you to, like an acqui-hire, sell yourself to another company, right? That's right. We're not going to make that 10x back, so they're going to make you sell to Apple, Google, Microsoft, Amazon, whatever. This is it. I have my own interesting story on this. With Tekpub, we were lucky. We hit that number in our first month, which was amazing. I could not believe it. James and I were just like, whoa. A lot of that money went out to our authors, so we weren't able to sustain ourselves. This isn't about that. When we started Tekpub, I got a call from Aaron Skonnard. He says, hey, I hear you're doing video stuff. Aaron is the CEO of Pluralsight. They were doing just on-premises training at the time. He said, we're going to get into that too. Why don't you guys work with us? I said, psh, not a chance. I think they're valued at $1.2 billion right now. It was a race between me and Pluralsight. I remember James and I were like, yeah, game on, because they're the only other ones that are getting into Microsoft video training stuff, which is what we were doing. They started taking funding, and we said, we're not doing that. Guess what happened? They ate me up. I mean that in the best possible way, but I really enjoyed working there. That's a very serious thing. The thing about it is, when they took their series C, I think it was, $400 million or something like that funding, some big number. Big number. I remember looking at that saying, wow, they are going to have to do something to make that happen for their investors soon. And now I'm watching them shift, and I'm thinking, I wonder how that decision was made. Which board members made that decision? For them, I'm sure they're happy, and I'm not knocking them at all. It's just not where they started. I kept thinking, did I dodge a bullet or did I not? What would you do if you, same thing, knowing what you know now, which way do you go? The end of the story is, I have a number of little business ideas that I'm thinking about, and one of them involves going to get a seed round and going from there. I just keep thinking, what's going to happen in five years' time? Where am I going to be, and am I going to be okay? Like Oren said, when they say, we need our money, so you're going to do this, or yeah, we're going to start courting an acquirer. And I just thought, well, no, I'm not going to do that. There is another issue. Because a lot of the VCs have, you mentioned MRR as one of the metrics, but for many VCs, the metric that they care about is something like, okay, how many new customers, what is the growth, stuff like that? And it's incredibly easy and tempting to start throwing that money to chase those numbers. So if I want to show that I'm growing, then I need to show that I have increasing month-over-month downloads and installs. So okay, I'm going to hire the 10 people in this audience, and I'm going to tell them, okay, your job now is to go to every single user group in the country.
And each of you is going to give a talk about the product that I'm selling, and each of those, like every day you're going to give a talk in tens and hundreds of user groups. That gets the word out very fast. That gets you downloads, it gets you installs. It doesn't actually improve the product, except that it improves how many people are familiar with it. And it's very easy to burn through quite a lot of money in this manner and end up with a huge amount of technical debt, because you're going after the sexy stuff and not going after all of the taxes that need to be paid. In the database world, that means that, oh, look, I have a really good benchmark, but it tells you that you're writing to memory or doing something like that, and you don't have good monitoring support. You're talking about vanity metrics. Yeah, stuff like that. You're talking about going for eyeballs instead of going for profitability. Yeah. But for some companies, that's not a bad thing. Like, do you think they have their place, and that's to help you grow and help you scale? If you need that, if you have a business idea that requires that money, if my idea requires me to have a team of 10 programmers for a year, and otherwise it's not possible to do that, absolutely you need that money. But at the same time, you have to consider, OK, do I have other alternatives? Can I give out stock? Can I take a loan from the bank? By the way, taking a loan from the bank or mortgaging your house tends to be a lot less stressful in the long run if you're actually successful. Did you do that? What? Did you, is that, I mean, did something? Did you take a second mortgage out? No, I got a first mortgage and then quit, and then I was busy paying the mortgage and running the business. But I mean, that's a good point, Ben. Like, different kinds of businesses require different kinds of growth. So like, TrackJS was like a monitoring software as a service business that had like a niche, and it didn't need VC. Like it was, in a way, like the VCs were right to laugh me out the door, because like it wasn't a good business for them. But something like Uber couldn't have happened that way; you couldn't bootstrap Uber. It just required so much scale before it could have even potentially been successful. But like, it depends on what kind of business you want to start. And then for me, it's all about valuation. If you go in really early with just an idea, they are going to take a large part of that and not give you much money for it. If you go in with 10K revenue and it's a proven business model and it's growing and it's scaling, then they're going to look more favorably and you're going to get a better deal. And you've also, you are de-risking the problem from both sides. They know that it's working, they know that it's growing, so it's less risk to pump money into your company and help it grow with you. And I think that's, for me, that's the viewpoint: don't take money at the start, try and find other ways to bootstrap it until you are generating revenue and then make a call. How quickly do you want to grow? So you'd be willing, you'd be open to investment at some point? It would open doors and opportunities which I wouldn't have if I was bootstrapping. So for example, bringing on additional authors and growing that out in very similar ways to what Pluralsight have done, because they have the cash reserves that they can give a lot back to the authors in advances and stuff like this, which if you're bootstrapping, you can't do.
And so it makes it more attractive, they have more content, they can sell it to bigger, more enterprise companies, and as a result, they get valued at 1.2 billion. It's easier if you have money to generate more money. Yeah, it does seem to be the way of it, doesn't it? It's kind of that unfortunate paradox. So let's go back to the technical side a little bit. So we talked about kind of the biggest technical mistake, maybe, how did you guys change as software developers? How did your stance on software development change after doing these businesses for a while? Ben, you're laughing so I'm going to... So I used to be very into test driven development, unit testing, clean code, software craftsmanship, and that all went out the window in the startup. That just completely disappeared. And so that's changed. And my way of approach, where my focus is when I'm writing code, I don't necessarily write unit tests for anything. I don't like... Me neither. Like, I find out from a customer, does this work? Is it meeting their requirements? And then that's my focus, not did it make everything green? And that's where I had to do a big mind shift. And sometimes bugs get introduced and that's a problem, but at the same time I get stuff to market more rapidly and I validate whether it's actually important or not. What's the point in having beautiful, clean code if you're not making any money? So you need to make this trade off. I know you'll have the opposite view at the moment. Oh, I see what you mean. Well, I mean, it's funny. I kind of... Yeah, I don't pay as much attention to unit testing as I did, but I still do it. But I've actually... It's so funny. I'm so scarred from my experience. I'm all about databases now. I'm all about... That's what burned you and so that's what you're scarred from. Yeah, no, I picked Postgres and I said I have to get as fluent as I can in this thing. I wanted to think in Postgres. And yeah, and so that's what I've done. So you just have this beautiful normalized schema? No, I don't. It's a long story. I have all the data I need now. Every bit of it I have. I guess that's where the focus is. You're focusing on making sure that certain things are in place, like the metrics are there, because that's more important than having the unit testing. You can use those to then judge how your system is performing, and you have performance metrics so you can judge if you're having the right impact on the code base or not. And that's what the metrics are now used for, to judge whether I'm doing it in the right way or the wrong way. Yeah. When we are at a certain scale, we'll probably then go, like, we now need to stabilize and improve this. And then we'll have time, we'll have resources, we'll have the effort to hopefully do that. But obviously, adding unit tests after the fact is always interesting. But that's a different problem which we'll be solving at that point. We'll be using the volume-based prioritization method for that. So if you have a customer that shouts that they need that, you do it. If not, then no one cares. From my perspective, the biggest technical difference is that I don't care that much what the code is currently doing. I'm mostly looking at all of the taxes that it has to be paying. And by taxes, I mean under failure conditions, how does the error handling go? Does it go to the log? When it goes to the log, is it, if it's important, like we ran out of disk space, it's not just writing to the log.
It has to show to the users, send an email, all sorts of stuff like that. We have a customer that deploys us to 1.5 million machines. And at that scale of things, you see that, okay, this hard disk may start to just decide to lose a write, or you just have a read that is hanging, and a whole bunch of other stuff like that. Your code starts getting, okay, I need to handle these cases that never happen on hardware that you buy for more than five bucks at Walmart. Honestly, I think they went to the trash and got some hardware out of there. And that's what they run on, because they do very strange stuff, but that's the kind of thing that really matters to us now. Because when I'm looking at our cost, every time that a customer opens a support call, that means one to three people have to sit on that support call and figure out what is wrong, whose fault it is, how to fix it, and need to get out a patch, and maybe patch multiple versions, and stuff like that. So we spend a lot of time and effort just paying all sorts of taxes around operations and manageability and things like that just to reduce that cost. I would echo a lot of what I've heard. So before TrackJS, I did a lot of enterprise consulting and I preached a lot of the clean code and TDD and unit testing, and a lot of that did go out the window once it was about my money going out. When it's my credit card that's paying for the infrastructure costs, there's a strong motivation to go as fast as you possibly can and really focus on what you think is the most important thing in software. But testing overall, I think, just shifted to where I saw the risk. I had a broader perspective of software. And the risk was, is anybody going to buy this? Does anybody care? What are the real things that are going to go wrong? And I spent way more time testing those, which kind of turns into monitoring. I spend less time actually writing build time tests and more time, like you were saying, making sure the logs go to the right places. I have tons of analytics and front end stuff to know, what are my users clicking on? What do they do? If I launch this feature, does anybody care? Does it drive signups? When we blog about, we just launched feature X that does this amazing thing, did registrations go up? If they didn't, is it because of normal internet cyclical variations? I spend a lot more time measuring what is happening than on the actual technical implementation. I think that a lot of that is, where does it hurt? As a developer, writing enterprise software, it hurts me if I have to go back and get a bug from QA and restore all of that state and fix that bug. As a business owner, it hurts me if users aren't registering. So that's what I'm looking at working at. For our case, support calls hurt us, because they take a lot of time. So we focus on reducing the part that is painful, and that drives our decisions. Avoiding pain. We're getting close to time, and so I want to give everybody one last shot to say, if you were giving advice to someone who is about to take on this role, or you were going to do it again, obviously in your case, Rob, you are doing it again, is there anything that you would tell people before starting a business that you want to say right now? We've got about two and a half minutes. So if everyone could go fairly quickly. I'm going to suck up to you and I'm going to say have a co-founder. There are a lot of times when you want to quit.
Then life gets in the way and somebody says something bad about you on the internet and you feel terrible about yourself. And you need somebody to pick you up and help you push through it and not quit. That's incredibly important to be successful. On the other side, if you don't have a co-founder, it's really motivating to know that you won't eat if you don't continue. But what I did before I started, and what everyone should do, is sit down and map out the absolute worst possible scenario if the business fails. And that means that I'm going to lose every bit of money that I put into it. I'm going to be in trouble with the bank, with the mortgage, with pretty much everything that I can think about. And then I try to see how I can make this less painful if this happens. And when I knew that if I failed, I would survive this, I wouldn't like it, it would be painful, but I would survive this, then I was certain that I could go and do this. I like to joke that I got the mortgage and then quit my job. I had enough money set aside for three to four mortgage payments dedicated to that. So that wasn't an issue. I could have found a contracting job, I could have become an employee and paid that, and that was my fallback plan. And if you have that, that's important, because that means that you can say, okay, nothing bad will happen. This is where I draw the line. And if I fail, I'm good. Now let's go find a job. I would say find your motivation. And what I did just a month ago is I wrote down, sort of in a future journal style, what I saw, you know, for me, this is what's going to happen. And then what I always tell people is if you want to do something, you have to make the choice. And once you make the choice, then you pull yourself to it. And for me, getting motivated is the hardest thing. And I literally talk to myself every day. And so I echo exactly what Todd said. If having a co-founder motivates you, great, but find that motivation. Yeah, I agree. Find what you love, find what you're passionate about. Startups are way too hard, and there's so much stress and pain, to do something which you don't fully believe in and aren't fully passionate about. And if you're not passionate about it, like, change it. Change the idea, move on to something else, try something different. Find another way of working. But yeah, just keep yourself passionate about what you're doing. And if you're not, figure out a different way to do it. Awesome. Well, thank you guys so much for coming. If everybody could give them a huge round of applause for taking time today. And I'm sure you guys will be willing to talk to people after this too if there's questions. So cool. Now I've got to get these guys to get the room back to normal.
Are you tired of building someone else’s product? Are you tired of building yet another business application? Are you itching to build something amazing? Join our panel conversation and Q&A with real developers who’ve started and built software businesses. Let’s talk about the failures and successes, the fears and arrogance in starting a business. Bring your questions and let’s startup. Panel: Eric Brandes, CoFounder TrackJS. Todd Gardner, CoFounder TrackJS. Ben Hall, Founder KataCoda, Rob Conery, CoFounder of Tekpub. Oren Eini, CEO at Hibernating Rhinos.
10.5446/51832 (DOI)
All right. Welcome, everyone. Thank you so much for coming and joining me. Today we're going to be talking about Phoenix Channels. It's a distributed PubSub and Presence platform. And we're going to be talking about Phoenix today. Phoenix has some other features, but we're going to focus on some of the more real-time, soft real-time systems that are provided within Phoenix, and get a little insight into some kind of academic computer science research that was implemented inside Phoenix to provide some really nice functionality. My name is Sonny Scroggin, one of the core team members on the Phoenix framework. And you can find me around the Internet. I go by scrogson. So if you want to follow me on Twitter, I can tweet out some links to various things that I talk about during this time here. And so I also have a bunch of different Elixir libraries that I author and maintain as well that you might find useful. So really my goal today here is to really convince you to take Elixir, Erlang, and Phoenix seriously and to really consider using them in your stack at work. So what is Phoenix? So Phoenix is what I like to call a distributed web services framework. So what exactly do I mean by that? Well, the web itself, right, we have this idea that web frameworks are generally all about this request, response kind of thing, right? And they do that pretty well. And most of them all kind of provide that, otherwise it would be pointless, right? But the web is evolving, right? Over the last few years, we've seen that web technologies are transitioning from a stateless web to a more stateful web. And so that provides more efficient ways of communicating between clients and servers. Or servers and servers too. And so it's no longer about just request and response. We have web sockets if you are in a modern browser. And then we also have HTTP 2 that's kind of in the works right now. So in the future, we're going to be really able to tap into this kind of connected web services. So, and that's really what Phoenix is trying to do. We're trying to reinvent the modern web framework. So we're kind of reinventing what it means to be a web framework. So Phoenix is written in Elixir, which is a wonderful, beautiful language. It's a functional programming language. The syntax itself is very commonly compared to Ruby. But that's about all the Ruby influence there is. Just some syntactical kind of things on top. But one thing that's really nice about Ruby is the community and the tool sets. And so some of those things have been really inspired by what Ruby has. Elixir also has a really incredible, powerful macro system that allows you to do metaprogramming. And so we've actually taken advantage of that within Phoenix to provide a really nice DSL for routing. So you can build up your routes in a very nice way and express things in your particular domain. And of course, one of the really awesome things about Elixir, I think that's probably the most amazing thing about Elixir, is the fact that it runs on the Erlang virtual machine. And so we've heard a lot about the Erlang virtual machine this week during various talks. And so I actually had like probably 20 slides that I was going to show how the VM works and all these really cool things that Erlang provides for us. But I figured, you know, there's already been a lot of talks, so I just stripped those out and we'll focus today on Phoenix itself.
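To give a flavor of that routing DSL, a minimal sketch of a Phoenix router might look roughly like this (the module and controller names here are made up for illustration):

    defmodule MyApp.Router do
      use Phoenix.Router

      # Map an HTTP verb and path to a controller action
      get "/pages/:id", MyApp.PageController, :show

      # Generate the standard CRUD routes in one line
      resources "/users", MyApp.UserController
    end

Because get and resources are macros, the routes are expanded and checked at compile time rather than matched with runtime string comparisons.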
But for those of you who haven't seen those other talks, we can get just a brief introduction if you don't know about it. Erlang was developed at Ericsson about 30 years ago and it powers pretty much most of the world's telecommunication systems today. And if you know anything about telecom, we really rely on our phones and we want to make sure that those things are up and running. And so Erlang provides ways to build fault-tolerant systems that don't go down. And they run for many, many, many years. And so with Erlang, we can take advantage of all of what the VM does and let it do all the hard heavy lifting for us. You know, so Erlang has wonderful failure semantics that allow us to respond to failures really nicely. And of course, it's also a distributed programming language. So anyone see Joe's talk yesterday? That was awesome. I had a really good time. But it's all about distributed programming and distributed computing. And Erlang has those semantics built into the language, and therefore so does Elixir. And it really makes building these systems really enjoyable and it makes it a lot easier as well. So as I said, Erlang is a functional programming language. It's based on immutable data. It's got an actor-based concurrency model. And it also has a preemptive scheduler. So it allows Erlang processes to be scheduled across all of your cores. So back to Phoenix. Phoenix provides you a way to build web applications, as I said before. And so we have the ability to write the kind of stateless things that most of us are probably familiar with today. You know, building JSON APIs or CRUD applications, kind of standard stuff. And of course, those are needed. Even though we have the ability to build kind of more stateful stuff, we still need to kind of drop back in to provide some kind of synchronous APIs and stuff. But where Phoenix really shines is in stateful services. So persistent connections with web sockets or any kind of TCP protocols where clients can connect to your server. So it's more efficient, because as we already know, in stateless kind of systems you have a lot of overhead every time you want to, you know, check to see who's actually requesting this thing. Because we don't have any way of knowing other than like using cookies and sessions. And of course, because it's built on Erlang and Elixir, we get the nice ability to build distributed systems. So let's talk about stateless for a second, just really quick. Essentially, we are probably all familiar with this. You get a request and you do a response. So this is really inefficient. Every time you get a request, you have to get some cookie value and find a user or something like that. And then you keep going to the database on every single request. Parsing headers and all that kind of stuff takes more compute power and time to do those things. And we have better ways of doing that. But as I said, this is still necessary for some things. So Phoenix has an MVC kind of architecture. So you have your domain model where you get a request. We parse out the route and we kind of choose which controller is going to be routed to based off of some routing information. And then that controller might go off and talk to your domain modules or whatnot and might do some data access and anything like that. Get that data back, present it to the view. It will give us a JSON or HTML representation and then we can send a response. Pretty standard stuff. But stateful is where we have event-driven or message-driven design.
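Before getting into the stateful side, here's a rough sketch of that stateless controller flow just described. The Accounts module is a hypothetical domain module, not part of Phoenix:

    defmodule MyApp.UserController do
      use Phoenix.Controller

      # The router picks this action based on the matched route
      def show(conn, %{"id" => id}) do
        # Talk to the domain model / data access layer
        user = MyApp.Accounts.get_user(id)

        # Hand the data to the view layer for a JSON representation
        render(conn, "show.json", user: user)
      end
    end

Every request repeats this full cycle, including session lookup and header parsing, which is exactly the overhead the stateful side avoids.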
And this is really where I think Phoenix shines. And we have this abstraction that we call channels. And this is an abstraction around PubSub. So we can subscribe to particular topics around the system. And when those things happen, our PubSub system will push it out to all connected clients. And we have this kind of thing. So persistent connections between the client and the server. And so as I said, we work off this idea of topics. And a topic is essentially just any kind of identifier. Usually it's just a string, right? And by convention, it has two parts. So it has the topic here, kind of the main topic. And then we also have the subtopic. And the subtopic, the way that you can use these works really nicely with the way that the message routing works. So you can actually have this room prefix here, which works really nicely with how the channels work. So we can have this channel. And we can say that anything prefixed with room and a colon here, we have this asterisk, which is a wildcard, that says route anything with the prefix of room to this room channel. And this allows us to build up handlers for that. And so this is the server API for channels. So when we work with channels, we have a socket, right? So we plug our socket into our web server. And we tell it that anytime you see a message coming in or connections on the slash socket endpoint, I want you to route them through this user socket module. So our user socket is the module that is going to be responsible for maintaining the connection and connection state and things like that. And it's got a couple things to note in here. So this is, again, the routing information, the routing of messages to channels. And then we also have the transport. So the transports that we ship with, we have a transport for web sockets and we have a transport for long polling. So if you still have to maintain applications for customers or users that are using old browsers that don't support web sockets, we have long polling as well. And these transports, you can build your own. Say if you have XMPP or AMQP, all these different protocols that you want to use, you can actually build your own transports and have them connect to your web server. So a couple callbacks here, we have the connect, and this receives some parameters for when you're actually connecting. So you might have like a user token that allows us to understand which user it is that is trying to connect. And then you can just return this OK socket tuple to allow the connection to complete. So the channels themselves, these modules, have some different callbacks that you can implement. And the main one here is join. So when you're joining a particular channel, the first argument is the actual channel name. And you can see here we actually have two different join functions. And in Erlang and Elixir we use pattern matching to route these things. And so explicitly on the first one, we're routing to room colon lobby. So that one will be chosen automatically by the VM if we're trying to join the lobby channel. And so in this case, as a demo, we're saying that anyone that wants to join the lobby can go ahead and join. It's no big deal because we want people to communicate and they don't have to be authorized or anything like that. But the second one down here, we're saying we're matching on the room and we have some subtopic.
So in this case we can maybe say, well, we want to check to see if that user is actually authorized to join this channel. So we can call up to some service and say, hey, is this user a member of this particular room? And it'll return back, you know, a boolean, true or false. And if it's true, we can return OK socket, which will allow the connection through. Otherwise we can return an error and a reason, for unauthorized. We also have the callback for handling incoming messages. So whenever I send a message on my channel, it's going to come to this module and it's going to hit these callbacks. So handle in handles incoming messages. And the first argument here is the event itself. So we have new colon message, and it will take the message and our socket, which is our state. So in Erlang and Elixir, we just use recursion to recurse over our state, and that's how processes kind of stay alive. And so we always have to receive our state and return it as well. And that's why, in these callbacks, at the very end of every one of these function calls we're returning a tuple that has the socket in it, which is our state. So we get this message and then we can say, cool, I got this message, now I want to broadcast it to everyone who's listening on this particular topic. So now that we see that, let's look at the client API. Now, warning, JavaScript might be hazardous to your health. So this is the JavaScript client, it's kind of an example of how you can use it. So you import the socket, this is the ES6 JavaScript syntax, it actually makes JavaScript slightly more bearable. And you can actually import it from the Phoenix package and then, using that socket, we can set up a new connection. So we'll say we want to connect to our server at slash socket and we also want to connect with some parameters. So we can assign like a user token into the parameters that are sent back to the server. And then we can connect, we can then take that connection for the socket itself and set up a new channel. So we say socket.channel, we pass in a particular topic that we're interested in, and then we get a handle on that channel and then we can actually join it. So once we join it, this is actually done in kind of a neat way where we send a message to the server, but we set up some callbacks to say that, hey, when this actually returns, then I want to do these things. So we can say, once I get a message back from the server, I can either get an ignore message. So if I get an ignore message, then we had some authentication errors. So we're not allowed to join or something like that. And if we receive an okay message, then we know that all is good and now we can do something else. And then we can also time out as well. So if we didn't get something back in a certain period of time, then we can fail. We can also set up callbacks for different channel events, so on close or on error and things like that. And then we can push. So this is the push API. When we want to send a message to the server, we say channel.push and then we specify a particular event name. So in this case, new message. And then we just provide a JavaScript object literal that will be serialized into JSON to be sent back to the server. And then, of course, we can listen for these messages as well. So when we send this message to the server, then our channel is going to broadcast it to all connected users, including ourselves. So this is the callback that will be hit when we get new messages that come in. Yeah.
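Pulling the server side of that together, a minimal sketch of the socket and channel modules described above might look roughly like this. The authorized? check is a made-up placeholder for whatever service call a real app would make:

    defmodule MyApp.UserSocket do
      use Phoenix.Socket

      # Route any topic starting with "room:" to the RoomChannel
      channel "room:*", MyApp.RoomChannel

      # Transports: web sockets plus a long polling fallback
      transport :websocket, Phoenix.Transports.WebSocket
      transport :longpoll, Phoenix.Transports.LongPoll

      # Called once per connection; a real app would verify the token here
      def connect(%{"user_token" => token}, socket) do
        {:ok, assign(socket, :user_token, token)}
      end

      def id(_socket), do: nil
    end

    defmodule MyApp.RoomChannel do
      use Phoenix.Channel

      # Anyone may join the lobby
      def join("room:lobby", _params, socket) do
        {:ok, socket}
      end

      # Any other room requires an authorization check
      def join("room:" <> room_id, _params, socket) do
        if authorized?(socket, room_id) do
          {:ok, socket}
        else
          {:error, %{reason: "unauthorized"}}
        end
      end

      # Fan an incoming message out to everyone subscribed to this topic;
      # the socket (our state) is always returned in the reply tuple
      def handle_in("new:message", %{"body" => body}, socket) do
        broadcast!(socket, "new:message", %{body: body})
        {:noreply, socket}
      end

      defp authorized?(_socket, _room_id), do: true  # placeholder check
    end

Note how the two join heads use pattern matching on the topic string, so the VM picks the right clause without any manual dispatch code.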
Yeah, so the question was: can you build your own JavaScript client? And yes, absolutely, you could. Clients can be implemented in any language — and we'll go over that in a minute; so, actually, right here. Looking at this from the outside view, we have Phoenix as the server and we have all these connected clients. Clients can be implemented in any language, so if you did want to redo your own client because you don't like ours, that's no problem — just build it yourself. In this case, we can have iOS clients — I think we actually have clients built in Swift and Objective-C — and we also have an Android client available. And for the .NET community, it would be really awesome if we had clients in C# and F# and stuff like that. Actually, for this particular talk, I was going to work with Matthias and Thomas to try to get them to build an F# client, but we just couldn't make the timing work. So, how this works: we have all these clients connecting to the server, and then let's say you get to a point where you're exhausting the resources on that one machine. Well, Erlang and Elixir — it's a distributed programming language, so no big deal: we can just add another node, and we can connect them together. And then we can have other clients connect to different nodes, and everything still works. All the messaging will still be delivered between all the nodes and all the clients, and it just works seamlessly thanks to Erlang's distribution. So that's all fine and dandy, but let's take a look at the inside and see what's interesting about this. When you have a client that connects using this JavaScript client API here — socket.connect — a process is added on the server. And when we say process in Erlang and Elixir, we don't mean an operating system process; we mean a VM process, an Erlang VM process. It's a very, very lightweight process, very low memory, and it's isolated: it has its own memory, and no one can reach in and touch it or mess with it, so it's very safe. Then that user socket registers itself with the PubSub system. The client can then say: hey, channel.join — I want to join the lobby. That message comes through the connection, it hits the user socket, and the user socket spawns a new process for the channel. That channel process registers itself — subscribes to that topic — with the PubSub system. And the same goes for any new topics that we want to spawn: every topic has its own separate process. So if you have a bug in your code and you end up crashing a process, it doesn't affect any of the other running processes in the system, and this is really great. Again, Erlang processes are isolated and they're concurrent — they run concurrently through the system — so you don't have to worry about crashing one process and it affecting anything else. When you want to broadcast — say this green process over here uses our broadcast API — it sends the message to the PubSub system, and the PubSub system then sends it out to all of the processes that have subscribed to that topic. And then that makes its way to the user socket and down to the client (see the sketch below). So that's all great. You can do that with Node.js, right? However, of course, you don't have the fault tolerance and concurrency the way you do with Erlang. But you can do it, right?
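Here's a small sketch of that PubSub flow with the channel layer stripped away — any process can subscribe to a topic and receive whatever is broadcast, locally or from another connected node. MyApp.PubSub is the default pubsub server name from Phoenix's generated config, assumed here:

```elixir
# A sketch of the PubSub layer underneath channels.
defmodule PubSubDemo do
  def demo do
    # Any process can subscribe itself to a topic.
    :ok = Phoenix.PubSub.subscribe(MyApp.PubSub, "room:lobby")

    # A broadcast on that topic is delivered to every subscriber —
    # on this node, or on any other node connected to the cluster
    # (nodes are joined with, e.g., Node.connect(:"app@other-host")).
    Phoenix.PubSub.broadcast(MyApp.PubSub, "room:lobby", {:new_message, "hello"})

    receive do
      {:new_message, body} -> IO.puts("got: #{body}")
    end
  end
end
```

Because message passing is location transparent, the same broadcast call works unchanged whether the subscribers live on one node or across a cluster.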
And other systems allow you to do these same things — I'm sure you can do all this stuff with .NET and everything else. But we have something else that is quite interesting: presence. Presence — I'm sure we're all familiar with what it is. Essentially, chat systems and things like that usually implement this: hey, I'm online — cool, I'm going to tell everyone that you're online. And then you go offline — I'm going to tell everyone that you've gone offline. So it's this way of tracking processes to see who's online and who's not. Sounds pretty trivial, right? Well, it definitely seems trivial on the surface, but there's a back story to this — like, why would we actually implement something so trivial within a web services framework? And really, the story goes like this. This is Chris McCord — he's the creator of Phoenix. And then we have José Valim, who's the creator of Elixir. And we all work on Phoenix together. Now, the hello world of Erlang and Elixir is building a chat server — everyone builds a chat server, because it's a really good thing to do in Erlang; it's very easy. So a lot of people come in to IRC or the mailing list and ask: how do I implement presence? How do I keep track of who's online, and make sure the results are actually right when people go offline and things like that? And there would be a lot of answers on IRC and on the mailing list — all these people trying to figure out how to do this and getting different answers from different people. Because really, in Erlang and Elixir, we have all these awesome ways to store state and be notified when processes come and go: things like Agents and GenServers and wonderful things like that. So it makes the problem seem pretty trivial. So Chris and José decided: hey, we're seeing all these people with this question, this problem of how to solve it — what are we going to do? It would be nice if we had one central thing that says: this is how you do this. I've got an idea — let's write a blog post that tells everyone how to implement this, because this is really trivial. So they decided: let's get together on Skype, we'll figure this out, hash it out, and write this blog post. Three hours later, they're still trying to work out all of the edge cases that happen when you have this type of problem. And again, it sounds trivial when you have one machine, but what happens when you have a distributed system? This is something people don't normally deal with when they're developing on their machine. They have one node running locally, they go off and store all this presence information in an Agent, and everything is good — it's all awesome, working. But then in production, you deploy your application, and now you have two different Agents running on two different machines. So we have Joe, Robert and Mike connected to node one, which has its own Agent, and Chris, José and myself join node two. But now, when you list the presences on each one of these machines, each node only knows what's local, right? So this is a problem — it's obviously not correct. So what a lot of people do is they go: ah, we need a central source of truth, we need shared state access. So they reach for the database. They go: ah, we'll just put it in Redis, right? Redis is awesome, right?
It's quick, it's fast, you know? It's great for that. And this solution would work for the time being, but it still has the problem that when your node catches on fire and goes offline, you now have a big problem, because you have orphaned data. Node two crashed and didn't have a chance to clean out all the state for its node. So all of its users are now offline, but node one doesn't know that — node one still thinks everyone's online. The other problem is net splits. Say node two is no longer able to talk to any of the other machines — or maybe it just can't talk to Redis — but it might still be reachable from the outside world. So now each of these nodes is accepting more connected users, but node two can't tell the other nodes: hey, there are more users on my machine. So we have this problem, and there's this thing called the CAP theorem. We've probably all heard of this: consistency, availability, and partition tolerance — and you have to choose two; you can't have all three at the same time. And really, you have to choose partition tolerance, because your network is not perfect — the network is going to fail you. So we have to choose either consistency and partition tolerance, or availability and partition tolerance. Availability says: hey, we might not have a connection, but we can still operate independently with no problem and still serve customers and whatnot. But when you have that, you have data inconsistencies — your data is going to be out of sync. So this is where conflict-free replicated data types come into play, or CRDTs for short. CRDTs are used to replicate data across a cluster, and the cool thing is that it's all about executing updates without the need for remote synchronization on every operation. This gives you what's called strong eventual consistency. You are not going to be consistent all the time; however, as long as all the nodes eventually receive all the messages, then all the nodes in the cluster will be caught up and good to go. CRDTs are something that have been in the academic research world over the last few years, and it's all basically proven with math — you essentially have data types where conflicts are mathematically impossible, and that is really cool. And replication without remote synchronization: some kinds of systems require that every time there's an update, you have to make sure everything has received the data before you can commit and say, okay, good, we can move on. That's a problem when you have net splits, because you now have machines that are off doing their own thing and not receiving updates. This particular paper is the one that was used to implement the CRDT that's in Phoenix, and it describes the delta-state conflict-free replicated data type. The difference is that some CRDTs require all the state to be shipped to every node, and that is inefficient. These guys came up with a solution that doesn't require that — you can do it with just deltas. You ship your diffs, they work themselves out, and it works really nicely. So this is an illustration of the problem. We have three nodes in a cluster.
We have Joe, who connects to node one. We have Robert on node two and Mike on node three. And these names I'm using are Joe Armstrong, Mike Williams, and Robert Virding, just so you know. So Joe's node is going to tell node two that Joe's online. However, the message hasn't got there yet, because networks are not perfect — sometimes messages arrive out of order or not in time, right? But node one is able to communicate faster to node three, and node two is able to send data over to node one, so those two know what's going on. Now node one has Joe and Robert. Node two still has just Robert, because it hasn't received its updates yet from the other nodes. And node three knows about Mike, Joe, and Robert. But what happens when Joe goes offline while the message saying Joe is online is still en route to node two? Then node one sends its diff over to node three and updates it, and node three in turn sends a message to node two to catch it up on the state events. These messages can arrive at any time and out of order. So how should node two figure out what to do? Does it discard Joe or does it add Joe? How exactly can we track this? Can you do it by just tracking timestamps? Say node two tags the operation with a timestamp that says: this is when this operation happened, and now I'm going to send it over here. Well, time doesn't really exist in distributed systems — you don't know whether there's clock drift happening and all that stuff. You can't rely on time, because it's a big source of problems. So instead of working with clocks, we have vector clocks. A vector clock is essentially a set of counters, one per node, that each node in the cluster keeps track of. Every time node one sees an event — it starts off at zero — it bumps its counter. Sees another event, bumps its counter. (And when two nodes exchange state, merging their clocks amounts to taking the element-wise maximum, which tells each node exactly which events it has and hasn't seen.) This is what it looks like in a three-node cluster: we have a three-tuple of counters, one per node. Node one sees an event, so it bumps its counter. Now it sends that state over to node two. Node two sees an event as node one sends it that state, so it bumps its own counter. But between node one sending its state and node two receiving it, node three has also seen an event, so it bumps its counter. Then it sees another event and bumps its counter, and another, and bumps it again. So now node three is at three events and it hasn't synchronized with the cluster. Node three then tries to synchronize with node two, so it sends its events, and now node two has seen two events while node one has still only seen one. Then node two sends its events up to node one, and now they're all caught up. So that's just a brief introduction to how CRDTs work. There's a lot of research that has gone into it, and I'm still reading through that paper to try to understand it — there's a lot of crazy math going on. So maybe we can hold the questions for the end. Let's go on to the actual API of everything. So we have a room channel. This has all the other chat-related stuff stripped out, but it just shows you how to use presence — something along the lines of the sketch below.
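A hedged reconstruction of the kind of room channel being described here — not the speaker's exact code. MyApp.Presence is the presence module Phoenix generates for an application, and the metadata fields are examples:

```elixir
defmodule MyApp.RoomChannel do
  use Phoenix.Channel
  alias MyApp.Presence

  def join("room:lobby", _params, socket) do
    # Defer the presence work until after the join reply goes out.
    send(self(), :after_join)
    {:ok, socket}
  end

  def handle_info(:after_join, socket) do
    # Push the current list of users on this topic down to the client...
    push(socket, "presence_state", Presence.list(socket))

    # ...then track our own process on the topic, with some metadata.
    {:ok, _ref} =
      Presence.track(socket, socket.assigns.user_id, %{
        status: "available",
        online_at: System.system_time(:second)
      })

    {:noreply, socket}
  end
end
```

Deferring the tracking with send(self(), :after_join) is the usual pattern, so the join completes before the presence state is pushed.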
So let's say that after you join, you want to grab and present to the user the current list of users that are online. You can use the push API: you push, on the socket, this event called presence_state, and the data is the list of users online. And then we say Presence.track. Presence.track takes a particular topic — so in this case, if we're on the room:lobby topic, it will track our process on that particular topic. We pass in an ID — it could be any value, whatever you use in your system to identify users; it could be a UUID, it could be whatever. And then we can add some metadata that we want tracked for that particular process. So we could say: yeah, status is available, we're online. And then we have the client API — again, some more JavaScript. Here we've also added a Presence module to our Phoenix package. This lets us do the same stuff we saw before: setting up the socket, connecting, starting a channel. Then we can have a JavaScript object that will be the thing in which we track all the state — it starts off as an empty object. When we get a presence_state event, we take that state and we use the Presence.syncState API: it takes the current state and the new state, and it returns the updated state. And the same thing for the presence_diff event. Now, if you'll notice, I didn't have to add anything to the actual channel to handle this presence_diff thing, because it's handled by the system itself — it's handled for you. And then we also have this Presence.list function that you can call, perhaps to format your presence data differently depending on your application. So, it was time for a demo, but unfortunately my laptop doesn't want to connect to the projector — I'm borrowing my friend's laptop, and we didn't have enough time to get all of the stuff on here, so that's a shame. But wait, there's more. This presence feature is kind of like an accidental service discovery mechanism. In distributed systems, we have all these nodes, and they have different things they can do. Maybe you have an email service or something like that, and when you have multiple nodes running, you can have those email services on each node, and each node can take advantage of them locally. But on occasion you want to do some distributed computation, or you find out: oh, the email service on my local node is actually really busy, so instead I'm going to see if there are any others available in the cluster, and I can send the work over there. So, similar to presence for users, you can also have presence for your different services. And this is one of the things that's really wonderful about Erlang and Elixir: every process in the system is basically like a little service. It's isolated, it's independent from all the others, it's running concurrently, it can do work — all you have to do is send it a message to say: hey, can you do this thing for me? Can I have this bit of data? Can you do this computation? So in this case, over here, instead of pushing around user presence, we can push around the process presence for email services. And we can see here that every single node is tracking that there are two email services — node one has one and node two has one.
And of course, in this particular example, node three just hasn't shared its own yet — so in the end, there would be three email services available. So how can we use this? Just as we would with tracking the presence of users, we could have a topic called services. On the services topic, we can register our service as an email service, and then we can give it some metadata, and this metadata will be tracked for that particular process. So we can say that the process ID, or PID, is self — in Erlang and Elixir, self refers to the current process, my own process. And then we might have a max jobs — I can only do 100 things at one time, or take on a queue of 100 things — and then my current workload. So when you're starting up your process, you can say: well, I'm starting off at zero workload. And then we have Presence.list — this is all the same API — give me all the services. It will tell you: here are all the services available within the cluster. So we have processes registering themselves under the email service. The cool thing about this is that, much like with user presence — where, if I open a bunch of different browser tabs, each one is a distinct piece of presence information; for every browser tab you open, or a mobile client you connect with, there's presence information in the cluster for it — the same applies here. For these email services, in that diagram I was showing, there would be an entry in this email list for every single node in the cluster. And this, of course, is tracked and replicated across the cluster, which is really cool. So this is some pseudocode of what could be done. We can write a wrapper around presence to expose a different way of dealing with this kind of thing. Maybe we could say Service.track, give it some process ID that we want to register under the email service, and then give it the metadata as we did before. This would then use the presence API under the hood to do that. Then we could create a function called all — Service.all — and it would go out and give us all the services in the cluster. And then we could say Service.list email, and this would give us a list of all the services across the cluster that are registered as an email service (see the sketch below). And then, of course, you could build your own kind of routing on top, where you might want to round-robin between the services, or maybe you want to say: depending on the max jobs versus the current load, I want to choose the one with the smallest load. All of this is totally possible, and I think it's pretty amazing. All right — I'm running towards the end. The demo was actually going to be a bit longer and pretty exciting, but unfortunately we weren't able to pull it off. So I do have some time now to give you some more details on how the Erlang VM works, if you're up for it. But before we do that: we also have phoenixframework.org, which has all kinds of awesome guides showing you how to use it and get started with it — from building your standard APIs using the MVC model through to the channel stuff, of course. But, yeah, this stuff is really, really exciting.
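Fleshing out that Service wrapper pseudocode a little — again a sketch, with the "services" topic name, return shapes, and metadata all assumed rather than taken from the talk:

```elixir
defmodule MyApp.Service do
  alias MyApp.Presence

  @topic "services"

  # Register a process under a service name, e.g. "email".
  def track(pid, service_name, meta \\ %{}) do
    Presence.track(pid, @topic, service_name, Map.put(meta, :pid, pid))
  end

  # Every service registered anywhere in the cluster.
  def all, do: Presence.list(@topic)

  # All instances of one service across the cluster.
  def list(service_name) do
    case all()[service_name] do
      %{metas: metas} -> metas
      nil -> []
    end
  end
end

# Usage: an email worker announcing itself on startup —
# MyApp.Service.track(self(), "email", %{max_jobs: 100, current: 0})
```

From here, round-robin or pick-the-least-loaded routing is just a matter of sorting the returned metadata by its current-load field.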
And the thing, again, that really sets Phoenix apart from most other things available out there is the fact that we run on Erlang. And Erlang is a really, really amazing and special thing. Essentially, the Erlang VM is like an operating system for your code: it knows exactly how to run your code in the most efficient way possible, and it's a system that will run on all the cores on your machine. This is how it looks: you have your host operating system, and then you have the Erlang runtime system, with all these different pieces of Erlang that plug together. You have the kernel, the standard libraries, and you have this thing called OTP. OTP is a framework that is an abstraction around processes. It lets you avoid thinking about processes in a low-level way — you can work at a higher level with these callbacks. It allows you to write your code in a very synchronous-looking fashion, while under the covers everything is happening asynchronously. So — processes. Processes are really the most awesome thing about Erlang, because they're all independent actors. Each process is totally isolated, and they're really lightweight — very lightweight threads of computation. They maintain their state by recursing over themselves, and communication is done via message passing. When you want to communicate from one process to another, you just say: hey, I want to send you a message. You get that process's ID and you send it a message. The message in this case is going to be a three-element tuple: an operation that you want done — hey, I want you to add a list of numbers together — the data, and our own process ID, so the receiving process knows where to send the result back. The other process receives it into its mailbox and says: okay, cool, I know how to add those numbers together, so I'll go ahead and do that for you — and then I'll send you back a message saying: okay, here's the result (sketched below). Each process in the system has its own memory space — its own stack, its own heap — and it's isolated from all other processes in the system. So you have immutable data and you have isolation, and this is really the key to building concurrent systems: you can't have your memory being eaten at by other things, other zombies around the system. Each process has its own little mailbox — that's how it receives messages. There's also this idea of links and monitors. If I start a process, and in order to do my job that process has to be alive — and if it dies, I need to die as well — then I would link to it. And then there's a different mechanism, called monitoring, which says: hey, I want to keep an eye on this process, but I don't want to die with it — I just want to be notified when it dies; I'll receive a message and I can handle that. And then garbage collection is pretty awesome in Erlang. There are actually probably like 17 different garbage collectors in the runtime, but one of the coolest things about Erlang is that garbage collection is done on a per-process level.
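Backing up for a moment to that add-these-numbers exchange — here's a minimal sketch of the message flow just described:

```elixir
defmodule Adder do
  def loop do
    receive do
      # {operation, data, who to reply to}
      {:add, numbers, from} ->
        send(from, {:result, Enum.sum(numbers)})
        loop()  # recurse to keep the process (and its state) alive
    end
  end
end

pid = spawn(Adder, :loop, [])          # an isolated, lightweight process
send(pid, {:add, [1, 2, 3], self()})   # include our own pid for the reply

receive do
  {:result, sum} -> IO.puts("sum: #{sum}")
end
```

Now, back to garbage collection.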
So in most other VMs, garbage collection happens at a particular moment in time where the VM says: oh my god, we've got to collect all the garbage — so it stops the whole system and collects all the garbage, and you end up with these spikes in your performance. Erlang gets around that with this per-process mechanism. And the BEAM itself is lovely. Every time you start a new node, it creates one operating system process, but then for every single core you have on your machine, it spawns a scheduler. So if you have eight cores, you get eight schedulers, and each scheduler has its own run queue. As you spawn processes, they're distributed across all the cores on your machine, so you get actual, real parallelism instead of everything just being scheduled on one single thread. Joe was touching on that yesterday. So in the end, Erlang is absolutely amazing: crashes are completely isolated, you get data isolation, garbage collection is handled per process, and the runtime load-balances all of the processes across all the cores on your machine — so you get the ability to do parallel computation. And with that, I think that is all I have. So it looks like I have time for some questions. Yes, sir?

You mentioned consistency a while back. And you also mentioned that when processes crash, they don't affect anything else. But what if a process crashes while it's handling a message, or the system crashes — do you have a way to persist the messages, to make sure they're still there and ready to be handled when the process comes back up, or to roll back the handling of the messages, that kind of stuff?

Sure. So to rephrase the question: if a process is handling a message and it dies, how do you ensure that messages aren't lost, right? There are all kinds of ways you can approach that — there's no silver bullet. But one of the things that's amazing in Erlang is these things called supervision trees. You can have processes that act as a supervisor of other processes, and that's where the monitoring comes in. If I start a supervisor, I can say: hey, all the processes that get started underneath you — monitor them, and when one dies, be notified, and then start that process again with its initial state (sketched below). So you can restart the process — it's actually creating a new one in its place, really. You could also wire in some mechanism where it goes and tries to fetch some state from somewhere. But a lot of the time, the reason processes crash is some bad data, some bad state. And much like your computer — have you tried turning it off and on again? — sometimes you just have to restart; it's that kind of thing. So yeah, you can definitely get around some of those things, but obviously there's always a chance of data loss if you haven't designed your system to handle that.

So there's no built-in mechanism for persisting messages?

Well, you would need to do some coordination between processes to ensure that. You know, you can store some state in one process, and it can spawn off another process that goes and tries to do the work with a copy of the data — and if that one dies and doesn't get back to you, then you know you need to spawn another one. Stuff like that. So yeah. Next question?
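As a sketch of the supervision-tree idea from that answer — a supervisor restarting a crashed worker with its initial state. Module names and the child spec here are illustrative:

```elixir
defmodule MyApp.Worker do
  use GenServer

  def start_link(arg), do: GenServer.start_link(__MODULE__, arg, name: __MODULE__)
  def init(arg), do: {:ok, arg}
end

defmodule MyApp.Sup do
  use Supervisor

  def start_link(_), do: Supervisor.start_link(__MODULE__, :ok, name: __MODULE__)

  def init(:ok) do
    children = [
      {MyApp.Worker, :initial_state}
    ]

    # If a worker dies, restart only that worker, with its initial state.
    Supervisor.init(children, strategy: :one_for_one)
  end
end
```

Any state the worker accumulated since startup is gone on restart unless it was stored elsewhere — exactly the trade-off discussed in the answer above.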
Yes?

So processes are more contained units of computation, and potentially you can have hundreds of thousands of processes all working at once, all in touch with each other — and it's only noisy if there's actually data moving across the wire?

So the question is that processes are isolated and they're all running concurrently, and you could potentially have hundreds of thousands of them, millions of them — and yes, that's true. Erlang systems can handle many millions of concurrent processes running in the system, as they are very lightweight. With Phoenix, we've actually done some benchmarking of how the channels and processes perform: we were able to get two million clients connected and joined to a single channel, and we were able to send a message out to all of those channels — all two million processes — a big, giant Wikipedia document, and that document took only two seconds to reach all of the connected clients. The tricky part, though, was actually the benchmarking itself. This was a single server — we had a big machine with lots of memory and lots of cores that could handle the traffic — but the problem was getting the clients to connect, because generally, with outgoing sockets, you're limited to around 65,000 connections per machine. So we ended up spinning up 45 different servers just to connect all those clients to the server. A bit of trickery, but we got it done. I hope that answers your question. So — yes?

One point I want to clarify: when you're adding another node to your Phoenix application — just coming from a Ruby or Node background, where when you run more than one instance you basically have a load balancer in front that points to the servers behind — how do nodes know about each other? Is that all seamless?

So the question is around distribution and connecting nodes together — essentially, does Phoenix handle that for you? And really, the answer is: Phoenix doesn't handle that for you, but Erlang handles that for you. You can tell the nodes to connect, and they will connect through Erlang's distribution mechanism, which is all built into Erlang. And then you can automatically send messages across the cluster. That's one of the things that's amazing about Erlang: message passing is location transparent. I can send a message to a process, and it doesn't matter which node it's on — the VMs will take care of routing that message to the correct process. And that's really, really powerful. Anyone else? I can't see — it's like I'm blind up here. Okay. Awesome. Thank you. Oh, also, if you're interested: at 1:40, in room 10 I think it is, we will be doing the functional programming lab. And if you want to see that demo I was going to show — the one that wouldn't work on the projector — I'll be happy to meet you there and show you exactly how it works. So thank you very much.
Channels are a really exciting and powerful part of the Phoenix Framework. They allow us to easily add soft-realtime features to our applications. Channels are based on a simple idea - sending and receiving messages. In this talk, we'll take a look under the hood and learn how to build incredibly powerful, event driven systems with Phoenix Channels. We'll also look at how Phoenix makes it easier to track presence in a distributed system.
10.5446/51833 (DOI)
So, afternoon session on Power BI for the developer: the story on how you may integrate, embed or extend. A very brief introduction of myself — Peter Myers. I'm a business intelligence consultant. I've worked for some 15 years with Microsoft business intelligence products, mainly with SQL Server, delivering enterprise data warehousing solutions; but in more recent years, as Microsoft has reached out into self-service BI tools and this new generation with Power BI, I've become an expert in this space. I produce training content for Microsoft, and I have the good fortune of sharing the message about what Power BI can do for users and also for developers. So, in this 60-minute session, I'm interested to know, first of all: who has already had any experience with Power BI? I'd put that at about one-third of the room. And for that one-third: has any of it been developer-related activity, working programmatically with Power BI? Two of you, out of the third. All right. Well, Power BI is a huge topic, and 60 minutes would never do it justice, so I'm going to begin with an introduction that describes the fundamentals of what Power BI can achieve, and then focus on the topics relevant to developers: integrate, extend and embed. We need to start somewhere. The story with business intelligence — what we call the first wave — is that businesses needed to get answers from the data they store. Common business questions needed to be addressed, and this was achieved through business intelligence. In the earlier days, back when I started, this meant enterprise data warehousing; it meant IT and long-term projects. And what we discovered is that perhaps 60 to 80 percent of the business questions could be addressed through the data warehouse — it could never address every single business question. That gave rise to a second wave in BI, which was to deliver self-service capabilities to the business users themselves. Probably not to everybody — probably to a select few. We might call them power users or advanced analysts, and these guys or girls are literally whizzes when it comes to Excel; they drive some amazingly sophisticated solutions. So Microsoft turned around, back in the Office 2010 release, and delivered Power Pivot — and later on Power Query and Power View — supporting very similar capabilities to what enterprise corporate BI would deliver, but on the desktop, handling smaller, more focused projects. What I can describe to you today is that Power BI really delivers a third wave in BI, moving beyond just the analysts — who are still reasonably well-skilled, proficient experts — to the business users themselves. Those that have the questions can engage directly with a service to answer those questions. All right, so, common BI challenges: how do we get an end-to-end perspective across all data, regardless of type or format, whether it's on-premises or in the cloud? And how do we deliver the right data to the right people at the right time? These are common challenges with business intelligence, and what Power BI delivers is a cloud-based analytics service that addresses most of them. A quick list of key differentiators and attributes of Power BI: it has software-as-a-service sources available. So with a click, in a matter of minutes, if you need your Google Analytics data — you need to see it, you need to interact with it —
you can then authenticate through Power BI, enable it to connect to Google Analytics, and, with built-in reports and dashboards, within perhaps five minutes start exploring and interacting with your data — and potentially sharing it with others within the organization. What I'll talk about and demonstrate today is real-time dashboard updates, and how this can be achieved with two distinctly different techniques. Another ability of Power BI is to connect to data not just in the cloud but on-premises as well. We need to respect that, as a cloud-based service, how is it going to interact with your corporate stores — perhaps your Analysis Services data models, or your SQL Server databases on-prem? The deal there is that Power BI also comes with gateways which, when installed on-premises, create a secure channel up to the service, enabling either a scenario of real-time reporting — the request passes straight through to on-prem and results come straight back up; we call that a direct query mode — or, if you're going to work with imported data, you can schedule data refreshes to take place, perhaps on an hourly basis, via the gateways. Data exploration is delivered through Q&A. This is very much the web search engine concept: these days we just start asking questions and, amazingly, those questions get translated into reasonably decent search results. Power BI likewise supports, in some cases, the ability to ask questions of your data and have responses delivered back through visualizations. It even extends to the Cortana story: if you configure it correctly, you can ask Cortana, and Cortana will retrieve the Power BI reports and display them on your device. Then there's integration with many Microsoft products — we'll talk about other Azure integration scenarios, specifically with Stream Analytics, Machine Learning and big data. And finally, the ability to work with hybrid configurations, seamlessly within your Azure directories and your infrastructure. I could really talk all day on those topics, but to finish off the overview: when it comes to accessing data, it comes in many locations, shapes and formats — whether they are services or on-premises data, whether it's you distributing content within your own internal content packs, connecting to other Azure services, or working with data in Excel files — and ultimately there's the Power BI Desktop, which I'll work with today. Power BI Desktop gives you a tool on the desktop enabling you to connect to that variety of sources, load data in, integrate it if necessary, enrich it with hierarchies for exploration, enrich it with business logic, build reports, and — as a single file — publish it back to the service, whereupon you can build dashboards, configure automatic data refresh, enable Q&A natural-language querying, and then share this with others so they can access it through the device or web browser of their choice. Note there that there's a REST API supporting a development story for the creation and manipulation of data — essentially, a push-data scenario that can be achieved with the service. Up front, before the question forms in your mind — how much is this going to cost me? It may cost you absolutely nothing. There are two licensing tiers. The first is free, and it provides a rather generous one gigabyte of storage and a rather generous set of features, including some dev scenarios that may not cost you anything.
If you're looking for the full feature set — and I'll show you at the end of this presentation the differences in features between the two tiers — then you'll be looking at a price per month per user. I know in the US this comes to $10; in your region, I'm not exactly sure. Be aware that if you want to trial this, when you sign up for free, Microsoft are also generous enough to provide you with a 60-day trial of all the pro features. All right. Now, Power BI is a huge topic. With the focus just on developers, we're really going to talk about how we can create real-time dashboards, how we can integrate Power BI into applications, and how we can develop and integrate custom visuals to extend beyond the 26 visualizations that already come with Power BI. Are there any questions? I've set the scene. All right, let's begin with an activity. I've got a dashboard here, and what I'd like you to do is help and participate by filling in a brief survey. The survey asks six questions. So would you mind either using the QR code or your mobile device to navigate to this URL, and answering the six questions? My apologies up front: one of them asks which city, or the nearest city, you come from, and there are 600 items in the drop-down list — it was just the easiest way to quickly get this out. But I ask you not to submit yet — simply fill in your answers. They don't have to be true; you may be funny, or you may use an alias. No details that you submit will be used against you. So let me explain what's going to happen. When you submit, the web server — it's using Azure Websites — is going to throw an event onto an Azure Service Bus event hub, and then Stream Analytics is going to query it live and push it to the dashboard. So don't hit submit yet; there's a nice experience when there's an influx of events coming through — it looks quite good. So what you see here is a very rudimentary dashboard consisting of just static tiles — no data at this stage. All right. Assuming that nobody else needs the URL, I'm going to exit full-screen mode, and I'll point out that here in the navigation pane — the navigation pane exposes, in my workspace, the three groupings of fundamental objects in Power BI. Data sets connect to data, and that full range of sources I described will ultimately be delivered to you as a data set. Data sets are then the foundation upon which reports are built, providing rich interactive experiences — a report consisting of multiple pages. And ultimately, when we want to share key metrics, we pin visuals from reports up to dashboards, and then we share the dashboards out. Still not clicking submit at this stage, I would hope. Here I am clicking on the data set. So Azure Stream Analytics has pushed the data set definition through. Take a look at the right-hand side, and you'll see I have a single table and a collection of fields that map to the inputs coming from you. What I'm going to do at this stage is just build out a dashboard, and then, when the dashboard is showing, I'll ask you to submit your survey results. So I'll just ask for a single field — the number of submissions — and that's shown to me here in a column chart. I'm going to switch it across to a card. I can tell you that five people hit submit. Not a problem. All right — so already six messages have been read from the event hub.
And what I'll do — and I have to do this — is save this report. Let me just call it demo. Having saved it as a report, I'm then able to pin a visual to a dashboard. All right, and while I'm here: one of the questions was, where are you from? So I'm going to build this out using a map — the cities in that drop-down list are all associated with a latitude and longitude. So let me bring in lat and long, and I'll just plot the number of submissions. And I can see that the six people who submitted come largely from — I guess that's Oslo, right? Then I'm going to save this and pin this one too. And I think I made a slight error before — I erased that one; let me just reconstruct it. Save. And pin. Now, switching across to the dashboard — I'll remove these URL tiles; in fact, let me just push them to the side. Oops — there is a, excuse me, there is a URL; I didn't mean to click. All right, so let's bring the number of submissions up here, and I should have a map of the world somewhere — I'll have two of those — the map of the world somewhere... down here. So, a dashboard: a collection of tiles, not necessarily sourced from the same data set. When we drive a car, what we like to see in our dashboard is relevant metrics. We don't want to be overwhelmed by lots of irrelevant data, because that distracts us and it's unsafe. I'd like to see the speed, I'd like to see distance, I'd like to see fuel — and if fuel's running low, I'd like to see some status indicator, like red, telling me something's off track, take some action. That's essentially what dashboards are designed to express. Let me clean this up a little by editing the tile. It might be nicer here to just call this Survey Responses — so we can provide titles and subtitles; this would be NDC Oslo 2016. Custom URLs could be set here as well. And then this one would be Submissions by City. Before you all hit submit — to demonstrate the Q&A capability, I can ask questions of the data itself. It might be something like: show me average distance. So, for the eight people that have submitted, I can learn here that the average distance travelled was 1,231 kilometers. We could make this a little more interesting by breaking it down by occupation. And that looks pretty cool — so let me pin that visual. So there's a second technique by which we can build up dashboards. Let me change that now to gender, and I'll pin it. And I think we also had age group — there it is — and I can sort by age group. For this one I might also customize it, using the full styling that's available even in reporting — for example, let's add some data labels, set the display units to none with zero decimal places, and increase the font size. You can see how rapidly you can build out these dashboards. Switching back to the dashboard page, then: this is now what I see for the details submitted so far. Let me put this into full-screen mode — it's not going to work nicely with these. For those that are ready, why don't you go ahead and submit your results and see what happens here on the dashboard. Okay — literally real time. With the backing of Azure there, event hubs can ingest millions of events per second. Stream Analytics — and I'll point this out to you; I won't build the solution, but I'm going to show you the screens for it — Stream Analytics will aggregate across time periods.
Perhaps every second, take the average, take the sum, take the count, and output the result of the aggregated events — push them to Power BI. Power BI receives it as a data set, we can build reports on it, and the dashboards based on this type of data set will reflect real-time values. What do you think? Literally no code. On the assumption that someone is responsible for the events arriving in the event hub, the story beyond the event hub — listening to events with Stream Analytics — is pretty straightforward. Any questions at this stage? So, what do we have out of the statistics? How many people...? I should have put numbers on here. Just as an idea: those that are between 95 and 99 travelled on average 11,000 kilometers. I'm thrilled that people of that senior age would make the effort to come here to Oslo. My home city of Melbourne won, so I know that would be me. And from the people coming from Oslo — we've got about 120 locals? Thank you. It gives you a taste of what can be achieved. So let's talk in more detail about how we can extend Power BI: a closer inspection of the REST API and how you might integrate Power BI with Azure services; the very new, still-in-preview Azure service called Power BI Embedded; and then we'll talk about custom visuals as well. There are four very exciting opportunities that should inspire developers to consider Power BI, whether it's an integrate, embed, or extend scenario. Let's start with a discussion of the Power BI REST API. This REST API has been designed to interrogate Power BI, to create data sets, to create tables, to push data to those tables — and, when necessary, to delete all data from those tables. So it programmatically manages resources, and as you push data into data set tables, it will be reflected in real time on the dashboards you've just seen. Essentially, what Stream Analytics is doing is nothing more than what you can do yourself: it's using the REST API to push the results of streaming queries into Power BI data sets. All right. In Power BI, at the very top level, when you authenticate, you're connecting to a workspace, and that could be a personal workspace or it could be a group. One of the pro features in Power BI is that you can create groups, invite members into them, and collaborate on shared resources. As a best practice, if you're pushing real-time data, you shouldn't push it to an individual's account. You would create a group, create a user that is not a real person, make that user a member of the group, push the data set definition to the group, and then allow collaboration on it — and potentially sharing from outside the group. Otherwise, of course, the individual leaves the organization and you've lost all of that work. So please work with groups — and note that groups are a pro feature, so that will require the $10 per user per month. So, at that very top level — whether it's a personal workspace or, much more preferably, a group — it contains a collection of data sets; in turn, data sets contain collections of tables, and tables contain rows. It's a very simple object model — see the sketch below.
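To make that object model concrete, here's a sketch of creating a push data set and adding rows over the REST API from Elixir. This is hypothetical client code: it assumes an Azure AD access token has already been acquired (token acquisition and app registration are covered next), it uses the HTTPoison and Jason libraries, and the data set, table and column names are invented — only the endpoint shapes follow the documented API:

```elixir
defmodule PowerBIPush do
  @base "https://api.powerbi.com/v1.0/myorg"

  defp headers(token) do
    [{"Authorization", "Bearer #{token}"}, {"Content-Type", "application/json"}]
  end

  # POST /datasets — define a push data set with one table and its columns.
  def create_dataset(token) do
    body = %{
      name: "Survey",
      defaultMode: "Push",
      tables: [
        %{
          name: "Survey",
          columns: [
            %{name: "City", dataType: "String"},
            %{name: "Distance", dataType: "Int64"},
            %{name: "SubmittedAt", dataType: "DateTime"}
          ]
        }
      ]
    }

    HTTPoison.post!("#{@base}/datasets", Jason.encode!(body), headers(token))
  end

  # POST /datasets/{id}/tables/{table}/rows — the service accepts at most
  # 10,000 rows per request, so larger batches are chunked down.
  def add_rows(token, dataset_id, rows) do
    rows
    |> Enum.chunk_every(10_000)
    |> Enum.each(fn chunk ->
      HTTPoison.post!(
        "#{@base}/datasets/#{dataset_id}/tables/Survey/rows",
        Jason.encode!(%{rows: chunk}),
        headers(token)
      )
    end)
  end
end
```

The 10,000-row chunking anticipates the per-request limit covered in a moment.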
In order to make this work, you will need to register an application. So if I just open up a web browser like this — there's an app registration tool where, when you sign in with your Power BI account, you can provide a name for the app, say whether it's a server-side web app or a native app — in which case you'll need a redirect URL for web authentication — and then tick, tick, tick... and I do not know why these ticks aren't working. Last time I used this, a matter of weeks ago, those ticks worked, so I have no idea. The good news is I don't need to create one. When you register the app, you get a GUID back, and essentially that is what you will use when calling the REST API. The permissions are delegated permissions, and they determine things like whether the app is allowed to view certain assets, or whether it's allowed to insert or maintain data — you can see the list of them here. From an authentication perspective, depending on whether it's a native client or a web app, the flow is somewhat different. I'm not going to read through the entire flow, but it should not be surprising to anyone who has worked with authentication on web service calls. A closer look at the operations, then. First, the ability to enumerate — having authenticated and connected to a workspace — what data sets already exist. Data sets are described in terms of a unique identifier, which is a GUID, and a friendly name. You can then create a data set: that requires a POST with a JSON document describing the data set in terms of its collection of tables and, for each table, its collection of columns. The data types are reasonably straightforward. You can also enumerate the tables contained within a data set. You can update an existing table's schema — I can't think of a great reason why this would happen often, but you might decide you need a new column in a table. If you do an update schema and you don't break the order of the columns, you can hopefully keep the persisted data while adding new columns, or removing a column you no longer need. You can add rows to a data set — this is where you push data up — and you may clear all rows from a table. There's also the ability to list groups. So part of your application experience might well be: you let the user authenticate, and then you present them a list — either the workspace you want to publish to, or a group that the user is a member of. You can list the groups, and then from a group you can call list data sets. Listing dashboards and tiles has been added more recently; these are in preview. This supports another embedding scenario: if you get the unique handle for a dashboard, a report or a tile, you may embed these into other web applications. They will require authentication, so you couldn't make those resources publicly available. A quick look at listing all data sets: you'll send a request that looks like this. You have an endpoint, you need to add an authentication token, and then it's basically a request like this, with the response coming back as a JSON document looking like this — there you see each data set in terms of an ID and a name. When it comes to inserting data, it's a POST request, and you'll note that what's been built out here is the GUID for the data set, which you could have retrieved through an earlier call. It has a collection of tables; tables have unique names and collections of rows, and here you may push a collection of one or more rows up to the data set. Some restrictions.
The maximum number of rows per single push is 10,000 — that just means you're going to chunk things down. If you really have that volume, chunk it into groups of 10,000. There is nonetheless a restriction that says you can only send 10,000 rows per hour on the free license, and a million rows per hour on the pro license. I've noticed that if you take that million and divide it by the number of seconds in the hour, that's actually the limit. I thought it might let me do 900,000 in the first minute and maybe 100,000 in the remainder of the hour, but it doesn't work that way — it's an average; looked at per second, that is the maximum rate you can achieve. If you exceed it, the service will let you know, and you'll have to expect to handle an error. The maximum number of rows stored per table is 5 million, and at that point you'll end up with errors and be forced to delete the table. But there is an answer to this: if you create the table using the basicFIFO retention policy, it will store a maximum of 200,000 rows. Imagine, then, that you insert the 200,001st record: it will store it, but it will drop off the very first row. So you move forward with 200,000 rows of data. The maximum number of pending requests at any stage is 5. So, let's take a look at a real-time dashboard implementation. And I'm not having a good feeling about this, but let's see what happens — I'm having some very strange networking issues, and certainly when I was setting up I was hoping I had another 10 minutes. One issue is that the Azure portal simply won't load. Let me just try that again. I have absolutely no explanation for this; I use this VM all the time. And with that not loading, it leaves me with a bad feeling. Anyway, this application has been designed to show you how the API calls work. And while you ordinarily wouldn't expose these types of options — they would be in a config file — that registered app's GUID gets stored in here, along with the redirect URI. The other URIs are fixed: according to the API documentation, these are the endpoints you'll communicate with for Power BI. Then you come to connect, and at this stage it wants to authenticate using a Power BI account. On first authentication it returns a token that must be placed in the header of all requests that follow. And I am crossing my fingers twice here... And that's a different error, and that's probably for a good reason. It worked perfectly in rehearsal 30 minutes ago — I was so confident I said I don't need to bring the recordings. That's a real shame, because what it would show you in this tree view is the full structure of the workspace I connect to: the collection of data sets, reports and dashboards; drilling into data sets, there are the tables; create data set, create table — I would have shown you the whole flow. I'm stuck. I'm sorry. There's absolutely nothing I can do. But I have more to talk about, and I can talk about this scenario — this one does work, and it's what supported the opening demonstration with the surveys that you submitted. So, the story with Power BI and Azure: there's a growing story with the Azure ecosystem. You can see clearly that Microsoft is positioning Power BI as the presentation layer across a growing number of enterprise services, whether it's Azure SQL Database, Azure SQL Data Warehouse, or even big data with HDInsight Spark.
And then we see Azure Machine Learning, and we also see Azure HDInsight with Hadoop. So already we can connect to many of these Azure services and build Power BI solutions. But really, with a focus on a more developer-centric solution, what I'll share with you is that you can couple certain services together. So here's the recipe for how I delivered the real-time dashboard, just out of interest. No — where is my dashboard page? So, with 57 responses in, what's driving this? Upon submitting your survey, an event is added to a Service Bus event hub. Listening is a Stream Analytics job, which pushes an output to Power BI. So what I can show you here is — first of all, let's just assume the event hub is looked after; pushing events onto it is somebody else's project. Here, I've just gone into Azure Service Bus and I've created an input named PowerBISurvey, and essentially it's just reaching out and listening to an event hub of this name. Next, there's a query — and for anybody that works with SQL, let me go ahead and show you: anybody that works with SQL statements, this will look rather familiar. I'm selecting various fields from the event details. There are also — not a lot, but a sufficient number of — functions that let me transform or cast, because essentially those values arrived as JSON and they need to be cast to their correct types. That's important in Power BI, because if we want to perform aggregation and summarization of values, they need to be numeric. And essentially the query reads from the input, which reads from the event hub. Now, commonly — though not in this example, because I want every single survey result sent through — when there's an ingestion of millions of events, you're not interested in projecting all of those onto a dashboard; you're interested in summarizing them by time periods. So what you would do is add a GROUP BY, in exactly the same way you would summarize a relational query result. And there's a special function there called a tumbling (or hopping) window, and you can say the tumbling window is every two seconds: perform this query, group by, and produce an aggregate result every two seconds — whatever events come in, aggregate and push out. The output for this query is defined by the INTO — into PBIWorkspace, which is defined inside the job. If we take a closer look at that output, it's pushing to Power BI, to a data set named Survey and a table named Survey. All right, so if I were to create a new output — nope, okay, the job's running, so I can't actually touch it. That's how it worked. And how much code was involved in that? Provided you had the events on an event hub — and this is a good IoT story, by the way — it's just a matter of writing the right query and setting up the inputs and output of a job. When you build the output, you will need to authenticate to Power BI, so some account has to be used. The preference is that it's not your personal account — that you've created a non-human account, created a group it belongs to, and you're pushing the data set to that group. That's real-time dashboards with Azure Stream Analytics. Moving on, then, to the topic of custom visuals. So let me open up Power BI Desktop.
Using query logic, a very expressive query language, Power BI Desktop lets you acquire, filter, shape, cleanse and bring in results. Every query that you bring in can come from a different source, and ultimately you can create relationships, and there's the integration story going on. And then with the power of DAX, Data Analysis Expressions, you can build business logic, calculated columns, measures, time intelligence functionality. Essentially what you're producing here is a model that integrates data, embeds business logic, adds hierarchies for navigation, and then you'll visualize that data.

So let me just do something very, very simple, and that is to bring in some data from a local SQL database. And this is the important thing: you have the ability, for certain data sources, to import and therefore copy the data into your solution, or simply a direct query, which says that every request made by a report or indeed a dashboard will flow directly down to the source itself. Now, if you stop for a moment and think of the benefits, in fact there are benefits from either approach depending on what you need to achieve. But the direct query opens up the potential for you not to have to be concerned about processing, not to have to be concerned about excessive volumes of data. And the direct query is open for SQL database on-premises, Analysis Services multidimensional or tabular models, and in the cloud it's available for three sources: that would be Azure SQL Database, Azure SQL Data Warehouse and Azure HDInsight Spark. So you can have real-time querying across big data loads here on Power BI Desktop.

Let me just choose; for now it actually won't matter too much. I'll go for direct query. There are some limitations with direct query, and that is you may not integrate multiple sources together; your whole solution has to be pointing only at one. So I'll go ahead and connect using my Windows authentication. And then the navigator says, well, the next level down from a server will be the databases themselves. So if I just bring in something pretty simple like a fact table, fact reseller sales: one table, I get a preview, and I have the ability to load all of this data (which is probably not a smart thing to do on a fact table) or use the edit here to refine what I really want. So it presents to me in the query editor then the fact table, and I'm going to keep this so simple that all I want to analyze will be sales amount by time. So I find that there is an order date column somewhere here... maybe not. Let me see: order date, order date. Okay, in this case we have three date columns, because it's what we call a role-playing dimension: there's a date dimension in the data warehouse, but there are three foreign keys pointing to it. So I make the decision that it's actually the order date. Now, what you see here with these links represents a foreign key, enabling you in the query editor to introduce columns from that related table. For example, let's bring in the calendar year and let's bring in also the calendar quarter. Now, I don't want all the columns that are here, so if I just bring in those two with a multi-select, right click, remove other columns, I've narrowed down all of that data to just the three columns of interest. How about then I rename these to make them friendly: I'll rename to quarter, year and sales. Over here, the applied steps represent everything from the sourcing of that raw data right down to the desired state that you want. And you could customize this logic.
There's a very powerful set of expressions and functions, and even the ability to create functions with parameters in here. For now I'm just going to load this. And load wasn't the right term: this is direct query, no data moved. But it now has an understanding that I have a table, that we can see here, called fact reseller sales. So sitting here on the report canvas then, I'm going to go ahead and drag year and ask that to be visualized as a slicer, enabling then filtering interactively on the report. That's 2012, and then I could show sales by quarter. And quarter should go on the axis. Alright, so that's the old BI, right? How else might we choose to express the data?

Now, what you'll notice on the visualizations pane is there are some 26 out of the box. But far more interesting, and what we'd hope to inspire you guys as developers with, is to consider that you may create custom visuals and you may be able to integrate them with Power BI. Alright, so what I'll do is go to the gallery, and what you see here is a contribution both from Microsoft but also from other developers out there that have been generous enough to share with the world what they've developed. That's not to suggest that what you develop has to be published here; you may keep it private. But one that's just sort of eye-opening, and not so traditional, is the Enlighten Aquarium. Alright, so we have a closer look, we read about this, and we say okay, well, let's see what this will do for our data. Downloading the visual: okay, you need to trust whatever you're downloading, and what it downloads then is a Power BI visual file. And then very, very simply, here on the visualizations click the ellipsis (and I'm sorry that that doesn't draw correctly sometimes; it's basically a warning with an okay), and then it says, well, where is that file? And then I add it to the toolbox, and now we have the aquarium visualization. So perhaps what I do is just keep that bar chart selected and switch it across to become an aquarium. What do you think? Which would you prefer, a bar chart or swimming fish? Alright, by hovering over a fish he's going to stop, and it's going to tell you the amount of sales that were achieved.

Let me point out also, in the service itself, when I'm in the reporting, you have a near identical experience here in the web browser. So what you can achieve on Power BI Desktop, I can come in here (granted I have permissions and that I own the report) and I may edit. Let me introduce a second page here, and I can do exactly the same thing. Let me go ahead and import that visual and use it to display what the survey results look like with fish. Okay. Or maybe not. Okay, I think we know what that will end up looking like. That's the story with custom visuals. The visual gallery is growing week by week, and this is Microsoft's approach of saying, look, we're not taking full responsibility; we'll put it out to the community and others. But do be aware that for the visuals that Microsoft have produced, they have made all of the source content available for you on GitHub. So anything that Microsoft publish and is available in Power BI Desktop, you can gain access to. That provides insights and understanding, and you may learn from that and then start designing and developing your own visuals.

The last topic that I'll cover is the new Azure Power BI Embedded service. Now, this is a service in Azure, and there's some confusion already up front.
I've just shown you Power BI as a service, but what you need to be aware of is that when you sign up for Power BI, you're using typically your workplace email address. If you've got Office 365 you already have a tenant, and Power BI operates within that tenant. If there is not a tenant, the first person that signs up will, behind the covers, create a tenant to support Power BI and the provisioning of workspaces for individuals and for groups and for sharing content within the organization, within the tenant.

This service is different. This is an Azure service that allows you to provision your own workspace collections and your own workspaces, whereby applications will be using them, not individuals, not groups. So it is designed to allow you to create compelling and interactive reports, to embed these into your web apps, and it supports quick and easy management and deployment. Let's just walk through the key attributes. You will still use Power BI Desktop as an authoring tool. The requirement will be that you build a Power BI Desktop file, preferably using the direct query mode and pointing to your data that resides in a cloud source. Presently, in preview, it won't support any on-premises sources; no word on when or if that's going to change. So that would mean today that you can connect to Azure SQL Database, Azure SQL Data Warehouse and also Azure HDInsight with Spark. So you would create your Power BI Desktop file pointing to those, and my attempt to do that failed because this VM isn't reaching the service. I do not know why. Alright, so what it's attempting to do is connect, and the IP address isn't good, but I have definitely used the IP address of this machine; my host can reach it and the VM can't. So I'm sorry, I'm very stuck. But at this point I'll ask you to use your imaginations about where the demo would go. I would come in here and I would open up the Power BI Desktop file. Perhaps I developed it against an on-prem SQL database; provided the structures are the same, I would point it to the Azure SQL Database with the right connection strings and credentials. I would save and close the file. So you have the full authoring capabilities, but do remember that with the direct query mode it's only one source of data that you can connect to. You then can use your own visualizations, or the out-of-the-box visualizations, to come up with the reports based on the direct query representation of your data.

You can easily embed interactive visuals into your app using REST APIs and the Power BI SDK. So I mentioned all of the methods. What happens is that you provision an Azure Power BI Embedded workspace collection, you create a workspace within it, and there's an API that supports the upload of the Power BI Desktop file to that workspace. The next thing that you would need to do is to apply a connection string: because the file uses embedded credentials, those will not move with it, so an API call will update the connection string for the Power BI Desktop file now that it's in the workspace. The next thing is, programmatically, by making the right calls, you can with relative ease add it into an application. You will need to authenticate by providing a workspace key and an authentication key that will be provided from the Azure portal, and then pretty much this just runs.
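As a very rough, hedged sketch of that flow; the route paths, key names and file names below are illustrative placeholders rather than the documented preview API, so check the Power BI Embedded documentation for the real calls. This is assumed to run inside an async method.

```csharp
// Hypothetical sketch: upload a PBIX into an embedded workspace, then fix up
// the connection string so direct query points at the cloud source.
var client = new HttpClient();
client.DefaultRequestHeaders.Add("Authorization", "AppKey " + accessKey); // key from the Azure portal

// 1. Upload the Power BI Desktop file to the workspace (placeholder route).
var pbix = new ByteArrayContent(File.ReadAllBytes("Sales.pbix"));
await client.PostAsync(baseUrl + "/workspaces/" + workspaceId +
                       "/imports?datasetDisplayName=Sales", pbix);

// 2. Update the data set's connection string, since credentials don't travel
//    with the file (the service exposes a call for this; shape assumed here).
var body = new StringContent("{ \"connectionString\": \"<azure sql connection>\" }",
                             Encoding.UTF8, "application/json");
await client.PostAsync(baseUrl + "/datasets/" + datasetId + "/Default.SetAllConnections", body);
```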
What that means is that when people are interacting with your application, embedded within it, in exactly the same experience that I'm showing you here on the desktop (let me go back to the fish), whatever you lay out here is exactly what they're going to see in the web app, contained within the frame of your application. You use your existing authentication and authorization methods within your app, and also between the app and the data sources. This does not require you to build analytics and reporting yourself: you need interactivity, users need to change filters, click on things, touch on things, and it all happens through Power BI.

And then you're probably asking, well, how much does this cost? Because it's a very different pricing model to the Power BI that we have as a service, whereby individuals and groups work within their own workspaces. It's currently in preview, which means that there isn't a cost associated with it, not while it's in preview. But the discussion at this stage, without a commitment from Microsoft, is that the first thousand renders per month will be free, and for every thousand beyond that it will be $2.50 US. Alright, and then you might ask, well, what is a render? That's a great question. If I've got a report with a filter that allows me to modify year, and let's suggest that I have one, two, three, four visuals, every time you modify the year that is potentially four renders going on. And so if you do the mathematics and you think of the number of users and the number of interactions that they have, that will give you an idea. But then again, it's not quite right, because if they move from one filter to another and back again, your application will have cached this, in which case you're not paying for something that has been previously cached within the same session. So I cannot even give you an idea about how much this might cost, but it's certainly something you can monitor from the Azure portal. When you look at the Azure Power BI Embedded service, you will see a collection of workspaces; within a workspace you can drill in, and you can then understand what the embed renders are at any point in time. Within a minute it reflects the up-to-date number of renders, and I guess, as with every other type of subscription, you could set a cap at some point if you said, look, that's where we'd like to stop.

What was this intended for? Alright, it's intended for a very different purpose from the Power BI that I began talking about. While both are relevant for developers, the scenario with Power BI Embedded is that you have a need for interactive, rich visualizations delivered through reports within a web app. And now, as a service, you may embed this, and you will pay per usage, not a flat rate per month per user, as is the case with the service. Are there any questions about that?

Alright, so essentially as a developer you'll go into the Azure portal and you'll provision a workspace collection. The next thing is you will start developing a Power BI Desktop file using direct query against an Azure source. By the way, you could import data, but if you publish that to the workspace it's static. It's not beyond possibility that you would actually just automate a push; it's not designed to do that, but if you then refreshed it on the desktop interactively (opened it, refreshed, saved and closed), you could just push it up maybe once a week, once a month. But it really is designed to reflect real-time data coming from the application's store, which can be achieved through the direct query mode.
Alright, so with the workspace collection: provision workspaces, upload the Power BI Desktop files, configure connection strings, and then go ahead and embed into your applications. Any questions? I'm sorry, I'm running early; without the demos there's not a lot I can do, and I'm really, really sorry for that, because the demos shine. They absolutely do, but there's nothing I can do without the internet. Let's wrap up with a review, and then I'm happy to go to the dashboard and share some information back to you about the survey results that you submitted.

Power BI as a service: and take note, this is a constantly and rapidly evolving service, with many new features and capabilities being delivered month by month. As a cloud-based service it's designed to give users access to data, and that data could come in different shapes, sizes, formats and locations, be it on-prem, be it cloud. Power BI Desktop really brings it all together. While you can, from the service, connect directly to services and get reasonably compelling experiences (Google Analytics, Dynamics CRM Online; there are built-in services, and I should point these out to you, that provide immediate visibility on your data), in my thinking, coming from a business intelligence background, it's really Power BI Desktop that allows me to do some amazing, sophisticated things. I can connect to those very same sources from the desktop, integrate with other sources of data, enrich with logic, enrich with reports, and publish. Watch this list: it's growing week by week with different services. I've demonstrated that when you have a data set in your workspace you can build reports and dashboards, and you may use the Q&A natural language, as I demonstrated, to ask questions. Also be aware that that only works against data sets that are Power BI Desktop files that have been uploaded, or push API scenarios as I've described to you in this session.

The two licenses: on free you can still work with this in a development scenario, but typically it's going to be Power BI Pro that provides the full feature set to work with the service, and certainly when it comes to the volumes of data you might be pushing; if you need that one million rows per hour, then it's going to need to be Pro. Give consideration also that Power BI, when you create these dashboards, supports sharing them with others within the organization. And if you have a mix of licenses, which I'm finding with customers is quite common, then you've got to be careful: if you produce a dashboard that uses any Pro feature and you attempt to share it with a non-Pro user, then they won't have access to it. So the discussion I have with customers is: think about the Pro features and think about the audience that you need to push dashboards out to. There's a complete pricing matrix that says what you get under each tier, and I will summarize it here by saying that if you're prepared to pay the $10 US per month per user, you'll get 10 gigabytes, not one gigabyte, of storage. When it comes to a data refresh scenario through the gateways (so, a Power BI Desktop file uploaded to the service, but needing to keep data current from on-premises), on free, and I think this is generous, you can do this once a day at no cost; but if you need more frequent updates, they support up to 8 per day under the Pro license. Relevant for the push scenario with the API is that you can do this for free, but no more than 10,000 rows per hour; the streaming rate for Pro is a million rows.
Beyond that, there's nothing really that directly impacts the development scenarios that I've spoken about, except for Azure Stream Analytics: the Azure subscription needs to be within the same org as the Power BI tenant. Alright, so that's one thing that you would need to work with; otherwise you would have to build your own service that would read the streaming result and push it using the REST API.

So what I hope I've inspired you with in this session: develop what you need. You're not happy with the visualizations? Go ahead and extend, and build your own. Maybe you're not happy with the data sources that are supported; perhaps you have some legacy system that there are no connectors to. Well, take on the responsibility as a developer to read from that system and to push the results up, and build a service that would manage that for you. Integrate with Azure services, and embed, with the new Power BI Embedded service, seamlessly into your applications.

Some resources that I'll leave you with. I think it's very important to keep your finger on the pulse of what's going on by reading the blog periodically. The reason for this is that the Power BI service itself goes through a release every week. One of the frustrations I have is that I maintain training content on this and it changes week by week, which involves constant rechecking that screenshots haven't changed and so on. But this is a great thing, because what Microsoft are doing is reflecting the needs of customers. And so I'd like to point out to you at this stage that if you go here, there's a whole community around Power BI, and when people come to me and say, can I do X, can I do Y, and if not, when is it coming? I just direct them straight away here and say: if you've got a need or an idea, then go ahead and register it. But before you do, maybe search on it first. You know, people are asking about, I need PowerPoint to have live, interactive Power BI embedded into it. And for anyone that might be familiar with Power View, which was a Reporting Services interactive tool, it did this on-premises with SharePoint, and people want it. So what you'll find is that people are putting up those ideas, they're voting on them, and you'll see, the more people that vote, Microsoft will respond. And here, great news: yes, we've acknowledged this and work has started. So when I read the blogs, I will expect next month, or the month after, that there's going to be something that says, hey, in preview, we have this new feature. And row-level security is another one: there were enough requests that it's driven a whole new feature to be added to the to-do list.

Alright. Also, Power BI Desktop goes through monthly updates, so you need to download and install that to keep Power BI Desktop up to date. The great news about that blog is that they provide you a video, and all the new features are described and demonstrated to you. All you need to do is watch it to keep up to date with what's going on there. There's a Power BI Developer Center, so all of the topics that I've shared with you today are documented, and examples are available to you here. There's the Power BI REST API console, where they document all of the current API, providing examples to you as well. And more news about Azure Power BI Embedded: it's very new, and it's available at this link here. Lastly, for the custom visuals, the GitHub project would be pretty valuable: download it and see what Microsoft developed for their very own visuals.
And if you simply want to borrow others', then the Power BI Visuals Gallery is available from here. If you haven't done so already (I think about one-third said that they'd used Power BI at the beginning of the session), Power BI is available for you to sign up for today at no cost, and maybe you stay at no cost. Be aware that there are a couple of limitations when signing up: with your email address, you cannot use public domains, so a Hotmail or Gmail address won't work; typically it will need to be an org. It cannot be a .gov and it cannot be a .military address either. Alright, so there are a couple of restrictions. Others that try to sign up without success find that it's because their organization has locked down self-registration. There's a documentation article that will talk about the possible problems and the fixes for those problems. But otherwise I find that most of my customers have no problem going in there, signing up for free, and taking the 60-day trial to experiment with the Pro feature set.

Alright, we have four minutes left. Are there any questions that I could answer at this stage? We just have one; what is your question? Normally when you create a report in Power BI and share it, the recipient is able to access that report. So what happens if I send this report to my colleague by email and he forwards it on to others?

So what you're describing there is a scenario which is called, and sorry, let me just show you, under reports: with a report you could publish to web. Is that what you're describing? That depends; there are two scenarios. If it's sharing a report, you could look it up through the API and get the details required to embed it in an authenticated scenario. Or there's something that's still in preview, and in fact I didn't mention it: what they're building out is the ability for you to publish an interactive report to the web that would be publicly available. You can experiment with that today. What you will get is an embed code, and you may manage those embed codes. So, for example, you gave it to somebody and you want to revoke it: what you'd have to do is delete the code and regenerate it, and then hand it to the people that you intend to keep that code. So watch this story moving forward. I didn't mention it because there's still a lot of lack of clarity around it, and certainly with pricing. If they're going to allow something to be anonymously accessed, imagine you place it on a blog and that blog entry goes viral and a thousand people are trying to hit it. I don't know what Microsoft are thinking here in terms of pricing, because everything in the Power BI service that I've described earlier in this session is a flat rate per month; all of a sudden we've got a feature that looks like it's going to be on some billable unit. Alright, so does that help answer your question?

Alright, are there any other questions? You're welcome, if you would like to, to interact with this dashboard by submitting events. And thank you very much for your time, attendance and patience. I do apologize for the connectivity issues, but I'd encourage you to go to PowerBI.com; there are a lot of videos backing up and describing the feature sets that I've mentioned today. Thank you, and all the best with your Power BI adventures.
In this session you will learn about the exciting new generation of Power BI delivering access to data and insights with Software as a Service (SaaS), and specifically what can be achieved by the software developer. You will learn that the new generation of Power BI enables developers to configure real-time dashboard scenarios, create custom visualizations, and also integrate rich analytics into application experiences by embedding Power BI reports into web apps. Several demonstrations will illustrate how to configure real-time dashboards that programmatically push data to the Power BI service by using the Power BI REST API, and also how to integrate Power BI experiences with applications. This session will be of interest to architects and software developers looking to understand and evaluate the developer capabilities of Power BI.
10.5446/51844 (DOI)
Are we live now? Can you hear me? You can. Good afternoon. I can hardly hear you; you have no energy. Some of you have got popcorn, ice cream. It feels like you've come to the movies, right? I mean, that's what it kind of feels like. Okay, so I'm Andy, and I'm going to be talking to you about simplifying thread safety. The idea being here that, generally, what happens is that as you get more concurrent, your code gets more complex. You kind of know: I don't just want to use a single big ass lock statement, I actually want things a lot more fine-grained. And the moment you do that, your code starts to become ugly. And we don't want that, because the average dude can't handle highly concurrent, complex code. They can handle your general business logic fine, but they can't handle the quite complicated concurrency stuff. So that's what we're going to try and do today. We're going to go through a series of examples where we take something, make it complicated, and then hopefully we'll throw in some of the types in the .NET Framework, and hopefully they'll end up as a thing of beauty: simpler, and hopefully even faster. So it'll be simpler and it'll be faster; that's kind of a win-win.

We're mainly going to look at these three things. We'll look at Lazy<T>. Who's used Lazy<T>? It's the most awesome type on the planet. The name doesn't do it justice, does it really? I mean, Lazy<T>, you'd be thinking, well, you know... but actually, you'll see, it's a pretty awesome type. Concurrent collections: we've got concurrent queues, concurrent stacks, concurrent bags. We're not going to use all of them; we'll play around with a few of them, concurrent dictionary and bag. And then finally we'll finish on immutable collections. Immutable means... yeah, when I pause, that's a question, right? You can shout the answer out, you know. So immutable means can't change. From a concurrency perspective, we like things not changing, because then we don't have to worry about locking. But immutable collections, again, doesn't really sound exciting, because the main thing you do with collections is add, remove, manipulate them. So immutable sounds a bit weird, but we'll see it's actually a pretty cool thing really.

So like we said: single-threaded code, business logic running against standard collection types, storing data in some state inside an object, is not that hard to write. The moment you say you want your business logic to be multi-threaded, then effectively, if you want it to be safe, you have to start putting some synchronization in. And ultimately that's where people start to run into trouble. You either end up with something very simple that doesn't scale, or you end up with something that is way too lax on synchronization: it scales beautifully, but one day in production it just goes bang, okay? And that's what we're going to try and get rid of. We want to write code that's highly scalable, not complicated, and guaranteed to give us solid results every time, okay? Typical places where people need to throw this kind of stuff in: we'll have mutable shared state, multiple threads talking to the same object, that kind of stuff. And quite often that mutable shared state could easily be inside one of our standard collection classes, whether it's a dictionary, a list, a queue, a stack, anything like that. And historically, if you use the regular collection types, you end up having to put your own kind of synchronization logic around them, okay?
We're hopefully going to get rid of that; we're going to start using the concurrent collections in order to simplify that kind of processing. And the final thing we're going to talk about really is this idea of immutable collections. This talk is normally an hour and a half long; it's only an hour slot though, isn't it? So I've had to drop things in terms of topics. The other option is I miss out every third word, but that didn't rehearse very well. So we're just going to miss bits out, I guess.

Okay. So the first thing we're going to talk about is simplifying some code, okay? I need to talk to you a little bit about this piece of code here first. It's a CSV repository. I assume you all know what a CSV file is; if you work in a bank, it's like the universal file format, isn't it? Because Excel supports it. So it's a CSV repository. You give it a directory, and the idea is it's going to load all the files from that directory, and then it's going to load them into a cache where the cache is keyed by the file name. The other part of the cache, obviously the value in the dictionary, is every single row inside the CSV file, where every row is represented by a string array. Okay? So inside the constructor, all we're doing here is loading up every single file into memory. We've then got a Map function. All that happens with the Map function is you tell it which file you're interested in, and you give it a function that can take a string array (in other words, a row) and turn it into the type of object you want to map that CSV row into. That's all the code does; it's really not any more complicated than that. Okay, so that's what that piece of code does.

Inside the Program.cs here, we've basically got some code that's going to go through every single file in the repository, load it, attempt to map every row, and tell me how many rows there are. And at the end of it, I'm going to print out how many files we've loaded. All the loading is happening inside the constructor, rather than me just calling an open method and applying a strategy here. So every time we effectively open a file, I'm incrementing the counter. In a minute, we'll start doing this multi-threaded, and the hope is that, because it's a cache, we only want to load each individual CSV file once, okay, even though we're going to have lots of threads hitting it. So we've got a little bit of infrastructure. If we run this code now, hopefully we'll see every file gets loaded. And there it does, and there are 37 CSV files. And it doesn't matter how many times we run this, we're going to see 37, okay? All the loading is happening inside the constructor here: we're loading every single file into memory.

So that works fine, okay? Now, obviously, if I just change this to use Parallel.For... Parallel.For is just kicking off 10 workers in parallel that are all going to do the same thing. So effectively, we're going to process 370 files, albeit 37 unique ones. But they're still only loaded 37 times, because, well, the loading is currently happening here before we even get to the multi-threaded bit, okay? So go run it. There's a lot more processing, but we've still only loaded 37 times. So that kind of sucks, because really, I don't want to load my files ahead of time; I actually want to load them on demand. These are weather CSVs: if someone says I want to look at the weather in Yeovil, that's when I want to load the CSV file.
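Sketched out, that eager starting point looks roughly like this; the class and member names (CsvRepository, Map, LoadData, the load counter) are assumptions, since the demo's exact code isn't shown verbatim:

```csharp
public class CsvRepository
{
    private readonly string directory;
    private readonly Dictionary<string, List<string[]>> csvFiles;
    public static int LoadCount; // demo counter: how many times a file was opened

    public CsvRepository(string directory)
    {
        this.directory = directory;
        // Eager version: every file is loaded up front, keyed by file name.
        csvFiles = new DirectoryInfo(directory)
            .GetFiles("*.csv")
            .ToDictionary(f => f.Name, f => LoadData(f.Name).ToList());
    }

    public IEnumerable<string> Files
    {
        get { return csvFiles.Keys; }
    }

    public IEnumerable<T> Map<T>(string dataFile, Func<string[], T> map)
    {
        return csvFiles[dataFile].Skip(1).Select(map); // Skip(1) drops the header row
    }

    private IEnumerable<string[]> LoadData(string dataFile)
    {
        Interlocked.Increment(ref LoadCount);
        return File.ReadLines(Path.Combine(directory, dataFile))
                   .Select(line => line.Split(','));
    }
}
```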
Loading on demand: that's the kind of plan we want to get to, okay? So obviously, the one easy change I could quickly make to this piece of code is, rather than initializing the dictionary and actually loading the data, we comment that line out and put another line in that simply says file name goes to a null list of string array. Okay? So we're not actually going to load each and every entry into the dictionary now. Well, that's obviously going to cause problems down here, yeah, because clearly this isn't going to work, okay? So I'm going to comment that line out for now. This was the simplest thing we had at the start; as we go through this, we're going to evolve it to be more and more complicated, but hopefully, when we throw in some magic a bit later on, we're going to end up right back at that same line of code. That's the plan, okay? We're going to start with something simple, evolve it to something more complicated, and then hopefully go back to simplicity again.

So I'm going to comment that line out. And now I obviously have to do something like this: csvFiles, that's the dictionary, of dataFile, I think it's called, yeah. If that's equal to null, then clearly I need to load it, right? csvFiles of dataFile equals LoadData of dataFile, dot ToList. Okay? And then I'm just going to copy that line of code. So now if we run it again... going back here, I'm going to get rid of the parallel stuff; let's just prove it works in a single-threaded application. Now we're loading on demand; we're not loading ahead of time. Let's go and compile it, let's go and run it. Still got 37. Life is good. However, if we put the parallel behavior back in, compile it, run it: 42. That's a good number, isn't it, really? Do you know how long it took me to rehearse that? I mean, that just took hours, right? 46. 42. So, yeah. So that's not good, right?

And the reason is, there's an inherent race condition here, right? It's a piece of multi-threaded code with an inherent race condition, which effectively means that if two threads arrive at this point, both see null, both move forward to here, and both end up loading the file. Okay? That's clearly not what we want to have happen. So the simplest answer is one big ass lock statement, right? Okay? So we could say csvFiles here and let's lock it. Now it's Uber safe, okay? Okay, I shouldn't really say that: I was in South Africa the other week and the local taxi drivers decided to take issue with the Uber drivers, with rocks and guns. You know, that's not really a good thing, is it, really? Okay. So that's loaded 37 times this time. And if you run it again: 37, 37. So three times in a row we got the same result; it clearly works. And it does; it's completely thread safe now.

But it kind of sucks, doesn't it? Because really, at the end of the day, any thread that comes in here now is going to take that lock. If it wants a CSV file that's already loaded, it still has to queue behind another thread that's in the process of loading. This kind of sucks; we want more granular locking here. The problem is, what do we lock on? The thing in the dictionary currently is a null reference: you can't lock on null. And we can't lock on the keys, because there's no quick way inside the dictionary to get hold of the actual key objects. So what do we actually lock on? Well, the reason why we don't want to put the CSV file in the dictionary is because it's expensive.
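For reference, the two stages so far, sketched (hedged; the member names are assumed as before). First the racy on-demand version, then the single big lock:

```csharp
// Racy: two threads can both see null and both load the same file (42, 46...).
public IEnumerable<T> Map<T>(string dataFile, Func<string[], T> map)
{
    if (csvFiles[dataFile] == null)
    {
        csvFiles[dataFile] = LoadData(dataFile).ToList(); // check-then-act race
    }
    return csvFiles[dataFile].Skip(1).Select(map);
}

// One big lock: thread safe, but every caller queues behind whoever happens
// to be loading, even callers wanting files that are already in memory.
public IEnumerable<T> MapWithOneBigLock<T>(string dataFile, Func<string[], T> map)
{
    lock (csvFiles)
    {
        if (csvFiles[dataFile] == null)
        {
            csvFiles[dataFile] = LoadData(dataFile).ToList();
        }
        return csvFiles[dataFile].Skip(1).Select(map);
    }
}
```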
So the file itself is too expensive to preload. But what we could put in the dictionary is a placeholder: a cheap object that stands in for the CSV. Then, when we need the CSV, we actually go and load it for real. So I'm going to create a public class here called VirtualCsv, and inside it we're going to put a list of string array and call it Rows. Okay? And then moving down to here now, instead of having a dictionary of list of string array, we're simply going to have a VirtualCsv. And then finally, down in here, we'll replace that line of code with: file name goes to a new VirtualCsv. Okay? So we've now got a placeholder, and that gives us the opportunity to actually lock on it. But that means this is obviously never going to be null now; we used to say rows, so we'll assign the rows here, and we'll move that back to Rows. Okay? Cool, it compiles. The acid test is we get 37 every time. Sure enough: once, twice, and three times.

Okay, so that gets better; we've got more granular locking now. But it's still not good enough, right? Because yeah, it's granular, but you know what? When's the only time we need to get a lock? The only time we care about locking is at the point of loading the file, because once the file's in memory, all we're ever doing is reading. And if all you're ever doing in a multi-threaded application is reading, you don't need to take locks at all. Okay? So really what we need to say is: only get the lock if it's currently not loaded. So the easiest way for us to do that is: csvFiles of dataFile, and only if that's equal to null do we move forward and do that. Okay? All right. And again... oh, sorry, missed one out: that's dot Rows. Yeah. Okay. All good. So if you go run it: 37, 37, 37.

This technique is known as double-check locking, because it kind of looks like you're paranoid here, right? We check if it's equal to null; if it is, we lock; once we've got the lock, we then say, is it still equal to null? And the reason is we could have been waiting for the lock while someone else was in the process of loading it, and therefore we need to re-evaluate afterwards. It's not just paranoia. Okay. So here we have a piece of code now that's reasonably well tuned for a multi-threaded application. Here's the block of code we've just written, and this was the line we started with. Which one is simpler? Hopefully you think the bottom one is better. Okay? And we're not just worried about job protection or any other issues you might want to worry about; purely in terms of simplicity, clearly that bottom one is far simpler, and it's far easier for someone to read and understand. They don't have to know the intricacies of double-check locking and understand why you check twice and all that kind of stuff. Why have you got this weird thing called a VirtualCsv kind of sitting in the way?

What we're going to do in a minute is refactor this, and we're going to end up virtually back at exactly that same piece of code. But we're not going to write a locking strategy ourselves; we're going to use that type called Lazy<T>. Okay? But before I put Lazy<T> in, let's have a quick look at what Lazy<T> is. Let me get a quick look at that. Okay. So here we have a simple Person class. It's got a Name and an Age property. I'm just going to put some code in the constructor that says created; let's say, person created. Okay. And inside here, we're just going to simply write a piece of code that says person.
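Sketched, the double-checked placeholder version just described (VirtualCsv comes from the demo; the rest of the names are assumed, and volatile is added here for safe publication):

```csharp
public class VirtualCsv
{
    public volatile List<string[]> Rows;
}

public IEnumerable<T> Map<T>(string dataFile, Func<string[], T> map)
{
    var virtualCsv = csvFiles[dataFile]; // the placeholder is always present, never null
    if (virtualCsv.Rows == null)         // first check: no lock taken once loaded
    {
        lock (virtualCsv)                // one lock per file, not per repository
        {
            if (virtualCsv.Rows == null) // second check: another thread may have
            {                            // loaded it while we waited for the lock
                virtualCsv.Rows = LoadData(dataFile).ToList();
            }
        }
    }
    return virtualCsv.Rows.Skip(1).Select(map);
}
```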
Back in the Person demo: let's do Andy, I suppose. New Person. And then I'm going to say, Console.WriteLine, initializing person. Andy.Name equals Andy, and Andy.Age... 21, right? Okay. Let's run it. So if we run this code now, no big surprises, right? We've got person created, initializing person, and we've got the name and stuff. Okay?

So this type called Lazy is basically a bit like that VirtualCsv. The idea is that you have a Lazy<T>, a lazy of something. Lazy is a placeholder: you create a lazy object somewhere and you pass it to anybody else, and the moment that person says, I need the data that's behind that lazy object, they call a Value property. The Value property then causes the object to be created. Okay? So if I change this now, instead of being a regular Person, if I change that to a lazy person... what does that say? That's a bad choice of name there, wasn't it? Okay. Jeez, man. Should have seen that coming. Right. Okay: initializing person. So the change in behavior we should see now is that previously we saw person created; now we won't see person created first, we'll actually see initializing person first. There's no Name and Age property on the lazy object, so in order to get to the person I need to go through Value. Because it's a Lazy<Person>, Value will be of type Person, and I can do Value.Age. So we should see initializing person, then we'll see the person created the moment you hit that bit, and then everything should get initialized correctly. Value keeps returning back the same object every time, so Andy will be initialized with his name and his age correctly. Okay. All good.

Interesting side effect there: Lazy<T>'s ToString just calls Value's ToString. I never noticed that before; I wish they'd put Value in, but there you go. Okay, all happy with that? So that seems to work nicely. What happens if we did this: add a string name parameter, so now we're initializing via the constructor. Okay, if I get rid of that bit out here... I mean, is this going to compile? It's a bit weird, isn't it? Clearly Lazy can't work, right, because I don't have a public parameterless constructor. But there's no generic constraint here, so it still compiles. The sad bit is it doesn't run. But the fact that it doesn't run is kind of important, right? Because it tells you something about the technology they're using under the covers to get the lazy object's value instantiated. And they're using that high-performance technology called reflection. Yeah: the thing you'd always want in a multi-threaded application, invoking reflection. So this kind of sucks. I'm not really sure why they ever did this, but I guess it's good for a demo, if you want to highlight it. So it's using reflection: that kind of sucks. It also sucks on the level that there's not many objects I can think of in my entire system today that have no parameters at the point of construction. It's a pretty rare scenario, isn't it, really? So you generally are going to need parameters.

So the lazy object actually takes a factory function. So inside here, I can actually do this: goes to new Person. So now, rather than using reflection, Lazy is going to use that piece of logic in order to create the value that gets assigned here. So we should be back exactly where we were before: initializing person, person created, and Andy has a name.
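Sketched, the Person demo in full; the factory overload is what avoids the reflection path and works with constructors that take parameters:

```csharp
public class Person
{
    public Person(string name)
    {
        Console.WriteLine("Person created");
        Name = name;
    }

    public string Name { get; private set; }
    public int Age { get; set; }
}

var andy = new Lazy<Person>(() => new Person("Andy")); // factory: no reflection
Console.WriteLine("Initializing person");
andy.Value.Age = 21;                 // first touch of .Value runs the factory, once
Console.WriteLine(andy.Value.Name);  // later .Value calls return the same object
```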
Okay. The thing that makes Lazy attractive is the fact that it's completely thread safe. If you give the same lazy object to multiple threads, you will only ever get that piece of creation logic run once, and hence a single object gets created. It's singleton-style behavior at this point, all hidden away from you. It turns out there are three modes for creation that you can use, from the LazyThreadSafetyMode enum. The default is what I've just done, which is ExecutionAndPublication; the others are PublicationOnly and None. ExecutionAndPublication says we'll only run that piece of creation logic once, guaranteed, and we'll publish one object. PublicationOnly says, you know what, if two threads go into that Value at the same time and the object hasn't been created, we'll run the creation logic on one thread and we'll run it on another thread; whichever one wins is the value that we take forward, and when the second one comes back, it goes, sorry, dude, too late, throw it away, this is the value that we're using. Okay? The other mode is None: that's if you're using it in an environment where you don't need thread safety at all. Pretty unusual, but if you did, that's what that means. But the default is ExecutionAndPublication. Okay, all good? Yeah? Okay. So that's Lazy<T>.

Well, taking that little bit of knowledge that we just learned, we should be able to go back to our CSV thing. Set that as the startup project. And going back into here, rather than use our VirtualCsv anymore, we can actually get rid of that now and change that to be a Lazy of list of string array. Okay? So now we've got a lazy object that's going to go into the dictionary. And the nice thing about that is we'll simply replace that with: goes to new Lazy of one of those, goes to LoadData of the file name. Okay, cool. So now basically we've done exactly the same thing: we've preloaded the dictionary with a cheap object (it's really cheap), but the benefit is that this understands synchronization. All that locking code that we wrote previously, we won't have to write anymore, because Lazy basically encapsulates all that logic for us. So going back down into here, we're not going to do any of this jazz anymore. We're just simply going to take that original line of code, which we all agreed was the simplest thing on the planet, and the only thing we have to change, and I know it sounds a bit rocket science, but there you go: put dot Value in, and we're done. Compile it, run it: 37, 37, and 37. Okay. That code now is pretty much as simple as what we started with. We're letting Lazy<T> take care of the thread safety issues, leaving us to concentrate on the business logic, which is the kind of thing we need people to be focused on and care about. Okay, so that's Lazy<T>.

So, we have concurrent collections. Concurrent collections really came about because, you know, there's not many applications you could write that don't have some kind of collection, whether it be a list, a dictionary, a stack, or a queue. And historically, if you wanted to run those in a multi-threaded environment, you had to put your own application-level synchronization on; you had to do all that yourself. As of .NET 4, they released the concurrent collections.
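Before moving on to the concurrent collections, the Lazy-based repository he lands on, sketched (names assumed as before): preload a cheap Lazy per file, and let Lazy<T> supply the double-checked locking.

```csharp
private readonly Dictionary<string, Lazy<List<string[]>>> csvFiles;

public CsvRepository(string directory)
{
    this.directory = directory;
    csvFiles = new DirectoryInfo(directory)
        .GetFiles("*.csv")
        .ToDictionary(
            f => f.Name,
            f => new Lazy<List<string[]>>(() => LoadData(f.Name).ToList()));
}

public IEnumerable<T> Map<T>(string dataFile, Func<string[], T> map)
{
    // .Value triggers at most one load per file; after that it's a plain read.
    return csvFiles[dataFile].Value.Skip(1).Select(map);
}
```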
These concurrent collection types effectively worry about concurrency for you, so you don't have to. Historically, before that, we used to have these things called synchronization wrappers that sat in front of the regular collections. So, here's the Queue class that we had in .NET; it's not thread safe. And if we decided we needed to make this run in a multi-threaded environment, we might naively say: as long as we create a synchronization proxy that sits in front of the real queue, and we give the synchronization proxy to all the code, then when you call the synchronization proxy it takes a lock, and once it has the lock it forwards the call through to the real queue. Therefore only one thread at any one time will be inside the real queue. You'd be thinking, this must be thread safe, right? But unfortunately, just the way the API is designed, it's inherently not. Because if you have another thread of execution, before attempting to dequeue it has to make sure there's at least one item in the queue. That's two operations; that's two discrete locks that the synchronization proxy will take to achieve that goal. And when you combine that with a second thread, we've got a classic race condition: two threads both see the count is equal to one, both move forward; one is successful, the other one is disappointed and receives an exception. Effectively, you'd have to make "is there something in the queue" and "dequeue" a single operation through the synchronization proxy for it to be safe.

So when they came to build the concurrent collections, they decided we can't use exactly the same APIs that you have with Queue today; we have to have a slightly different set of APIs, right? And they kind of come in these families. We get a lot of Try methods: TryDequeue. You know what? Try and take something from the queue, and if there's nothing there, return false. Don't throw an exception, because it's impossible for me to know in a thread-safe way ahead of time whether something will actually be in that particular queue; this way there's no race condition. So you have a lot of Try operations. We also have some more complicated ones, like GetOrAdd: in other words, I want to put an item into the dictionary, but if some other person's already beaten me to it and put something in at that particular key, I want to know what it is; but if they haven't, this is the value I'd like you to put in the dictionary. Okay? So, far bigger operations, so that you don't have to put a lot of your own granular locking in play.

Okay. So going back to that piece of code that we just played with: this is our cache, and it's okay. But at the moment, you know, when you create the CSV repository, it goes and looks in the file system and creates a key for every single file that's currently there. If at two o'clock in the afternoon one dude pops a new file in there and says, this is for importing, you never see it. Okay? Yeah, because it's too late: we've already created the dictionary ahead of time. What we want to do is build the dictionary, in terms of the keys in the dictionary, on demand as well. So not just load the data on demand: put the keys in on demand too. And again, we could write some rather horrible, complicated code, but actually we can simplify this by simply using a concurrent dictionary. Oops, put the namespace in. So we're going to use a ConcurrentDictionary. Okay?
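To make that earlier Queue race concrete before the refactor, a small sketch; syncQueue stands in for the hypothetical synchronized wrapper:

```csharp
// Broken even with a synchronized wrapper: Count and Dequeue are two separate
// locked calls, so the check is stale by the time you act on it.
if (syncQueue.Count > 0)            // two threads can both see Count == 1...
{
    var item = syncQueue.Dequeue(); // ...one wins, the other throws
}

// ConcurrentQueue folds the test and the take into one atomic operation.
var queue = new ConcurrentQueue<string>();
queue.Enqueue("work");

string taken;
if (queue.TryDequeue(out taken))    // false when empty: no exception, no race
{
    Console.WriteLine(taken);
}
```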
And I'm not going to do any of that anymore; we're not going to initialize it like that. We're simply going to say csvFiles equals new ConcurrentDictionary. Okay? So we new it up, and it's completely empty. Look how simple that piece of code is. Now, I'm not going to pretend we're going to stay at that level of simplicity; it's going to get a little bit more complicated. But the level of complexity is simply going to be this: we're going to create a lazy object of list of string array, and it goes to LoadData of dataFile, dot ToList. Okay, and the brackets. Okay, cool. So that's okay, right? Because we said lazy objects are cheap, so we can create those; that's really cheap to do. So when we come in here, we're always going to create the lazy object. And then all we're going to say here is: csvRows equals csvFiles.GetOrAdd. The key is going to be the data file, and the value is csvRows. Okay? So this is a very cheap object, and we're going to get-or-add it to the dictionary. But, GetOrAdd: if someone's beaten us to it and there's already a lazy object in this dictionary for that particular key, we won't get ours back, we'll get what was in there previously. And then finally, we just need to say here: return csvRows.Value.Skip(1).Select. So now we've got a cache that builds up over time. Okay? In a multi-threaded environment, we don't have to write any threading code; we just get it written at quite a high level of abstraction. Okay, go run it: 37. You'll love this number; you'll see this number in your dreams by the end of the week. 37. Okay? So 37 keeps coming up. We're consistent, which is good. So that seems to work.

But if we actually go and have a look here at GetOrAdd and go to the definition, we'll see there are a couple of definitions for GetOrAdd. There's the one that we've been using, which effectively takes the value, and that's the one that gets swapped in and swapped out. There's another one that takes a factory. And the main reason for highlighting this is because some people get misled: because it's a factory, right, this is the code it's going to use to add the item to the dictionary, you'd be quite reasonable to think, well, it'll only call that code if it's not in the dictionary already, right? I mean, that would be logical, right? However, it doesn't quite work like that. You see, if it's not already in the dictionary, sure, it will call that piece of code. But if that piece of code is running, and another thread comes in and does exactly the same thing while the first one hasn't come back yet, it'll run the creation logic again. The factory works the same way as Lazy<T> in PublicationOnly mode: all it guarantees is publication, not execution and publication. So you can't just say, well, we can throw away Lazy<T> now and just use that factory function. We still need to use that method and use Lazy<T> here, if we want to guarantee that we're only ever going to open the file once.

But what I can do, for the real die-hards who say, you know what, this is simple enough: we can actually optimize this a little bit further. We don't have to create the Lazy eagerly here now. What I can do here is this: instead of doing that, I can say, file name goes to... oops, sorry, no: it goes to new Lazy of list of string array, right?
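Sketched, the on-demand cache as it now stands, plus the optimized factory overload he's in the middle of typing; the surrounding names are assumed as before:

```csharp
private readonly ConcurrentDictionary<string, Lazy<List<string[]>>> csvFiles =
    new ConcurrentDictionary<string, Lazy<List<string[]>>>();

public IEnumerable<T> Map<T>(string dataFile, Func<string[], T> map)
{
    // Creating the Lazy is cheap. GetOrAdd hands back whichever Lazy actually
    // made it into the dictionary: ours, or an earlier thread's.
    var csvRows = new Lazy<List<string[]>>(() => LoadData(dataFile).ToList());
    csvRows = csvFiles.GetOrAdd(dataFile, csvRows);
    return csvRows.Value.Skip(1).Select(map); // Lazy guarantees one load per file
}

public IEnumerable<T> MapOptimized<T>(string dataFile, Func<string[], T> map)
{
    // Only build the Lazy when the key is missing. The valueFactory can still
    // run more than once under contention, but only one Lazy is published, and
    // the Lazy inside guarantees the file itself is only ever loaded once.
    var csvRows = csvFiles.GetOrAdd(
        dataFile,
        fileName => new Lazy<List<string[]>>(() => LoadData(fileName).ToList()));
    return csvRows.Value.Skip(1).Select(map);
}
```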
And inside here it goes to LoadData of file name, and ToList. Okay. And... what is that one, Jim? What have I missed? Where have I missed the parentheses? Oh, yeah, thank you. Okay. See, pair programming works, right? Okay, cool. That is the most optimal: now it will only generate the lazy object if it's not in the dictionary. If two threads come in at the same time, yeah, effectively you'll end up with two lazy objects, but again, only one of them will go forward. Okay. All good? Happy with that? Okay, cool. Why isn't the Lazy<T> behavior just built in? I don't know; that's just the way they built it. I mean, you might want publication only, right? You might decide: whichever thread does it the quickest, that's the one. In my case I cared, because I didn't want double creation. Okay.

Right. So, ConcurrentDictionary: a pretty handy thing to have. Just like the regular List class: if you create a list and you know ahead of time you're going to have thousands and thousands of things in it, you probably want to initialize the list with a big number, right? Thousands of things, so that you don't end up, as the list grows past its boundary, copying and copying and copying as it doubles and doubles and doubles. Exactly the same thing for ConcurrentDictionary: if you give it a hint at the beginning of how big the concurrent dictionary will be, you'll certainly save time by initializing it with large values. Again, there are two values you need to give it: one is the size, and the other one is how many threads you think will be touching this thing at any one time. Okay? So you need to give it both of those.

Okay. The next thing we need to talk about is ConcurrentBag. Anybody used ConcurrentBag? A few people. So, a lot of people, when they first look at the concurrent collections, go: oh, dude, I want a list, right? But there's no concurrent list. And so the next thing they say is, well, what is a list? Well, it's an ordered set of things. But we have a concurrent bag, and a bag is kind of like a list, right, except it's unordered. So if I put an apple, an orange and a banana into a bag, well, I might get the orange first, then the banana and the apple, or the apple, the orange and the banana; I don't know what the order is. What I do know is that all the items I put into the bag, I'll definitely get out of the bag. Whereas with a list there's deterministic behavior: when I put something into a list at a given index, that's the place I'm going to take it from. But there is no concurrent list; there is a concurrent bag.

The danger here is that people go: you know what, I want to have multiple threads all touching this particular set of things. All they've got to do is gather some items, that's all I care about, and I want to do it the thread-safe way so some other thread later on can pull those items out of the bag. And so they go for ConcurrentBag. And if they use ConcurrentBag like that, they'll find it really sucks and blows. Yeah, it's just dire. And that's really because that's not what ConcurrentBag is designed for. ConcurrentBag has a very particular use case, and once you understand how it's implemented, it kind of shouts at you. For a concurrent bag to work, it has to be highly concurrent: you want lots of threads at the same time able to add to and take from the bag simultaneously. If everything went through one central data structure, that's a point of contention. If you want to reduce points of contention, then effectively, inside the big bag, we have lots of little bags. Each little bag is bound to a given thread. And in the most ideal case for ConcurrentBag, as long as each thread is only ever adding to and removing from its own local bag, there's no contention: this is really easy.
If you want to reduce points of contention, then effectively inside the big bag, we have lots of little bags. Each little bag is bound to a given thread. And in the most ideal case for concurrent bag, as long as each thread is only ever taken and removing from its own local bag, there's no contention. This is really easy. However, there will be times when you're adding work on this particular bag and then you want to take and you take, take, take. There's no more items, but there's still items in the rest of the bag. And that's all you care about as a programmer, the global bag, not the little bags. So in which case, we end up stealing from other people's bags. So you put your hand into your own bag, you don't find anything, you go rob something from someone else's bag. Okay? That's a point of contention. We don't want to do that very often, but we obviously have to to maintain the data structure of the bag. So what, what concurrent bags good for is effectively situations like divide and conquer, where I might take a piece of work, stick it in a bag, take the piece of work out, split it into little, little bits, put it in my bag. When another thread comes along to get a piece of work, it takes it from the bag and then further divides that, but that division goes into its own local bag. Okay? And as long as most of the time is being fed from its own local bags, life is good. So like I said, we had to try and cram this talk into an hour instead of an hour and a half. So this bit isn't live code, but hopefully it'll be all right. So we've got a piece of code here that's going to effectively walk this part of the file system, doing divide and conquer. So you go into directory, all the subdirectories will get added to the bag. We've got four workers. Each worker is going to take a sub directory. Any subdirector inside a sub directory will get added to its bag and will basically walk the tree in parallel effectively. Okay? I've got a couple of implementations for the heap in order to remember items you need to visit. This one is using a regular list class. Yeah? And all we're doing is a regular lock. Okay? So we're going to run this now. That's clearly not good. Let's go and run the right piece of code. And a parallel tree will go. Set a start project. And let's go run it. Okay, so hopefully that's going to run. Wow. It's taking a while. Okay, so that's 5.1 seconds. Okay? That's using a regular list and a lock. Okay, I'm going to go and change this now. So instead of using that, it's actually going to use a bag. Okay? This is the bag implementation. As you can see, it's pretty straightforward, right? There's no locks because it's concurrent. It does everything it's expected to be. So it's 5 seconds we need to beat. We can get down to 1.7 seconds. Okay? So, now, I don't need the code got a lot simpler. It's got faster, too. Right? That's got to be a win-win. Often we think, oh, if we keep the code simple, we might not get the best performance. But hopefully by using these data structures correctly, you'll get the best of both effectively. Okay. So, the fun thing we're going to be talking about is this idea of immutable collections. Again, immutable collections, it sounds kind of weird because the thing I do with collections most of the time is I add stuff, remove stuff, that kind of stuff. And immutable collection means it doesn't change. But from a threading perspective, life gets simple because I don't need to worry about locks. All I'm doing is doing is reading from the data structure. 
Code is simple. And we like that. Okay? It's also fast, because there's no points of contention. Okay? So there's lots of reasons why we might be using immutable data structures. So we're going to play around. We're going to write some code now where we're going to try and build some immutability by hand, and try out some locks and that kind of stuff. And then we're going to move on to a NuGet package from Microsoft called immutable collections. This was written by the Visual Studio team in order to help them with various parts of Visual Studio, and I'll talk more about why they needed it near the end. Okay? So, let's go and look at immutable collections. Okay? So I've got some code here. We've got this thing called a bank. Okay? And our bank is really simple. All it's going to do — it's a simulation — is we're just going to run some code on a thread, and it's constantly moving a random amount of money from one account and adding that random amount to another account. Okay? Would you like that kind of bank? It's like the I'm Feeling Lucky Google bank, right? You put your money in at the beginning of the month, and maybe at the end of the month you get more, maybe you get less. But the overall amount, you know, is still the same, right? Okay? So that's the plan. So we run this. That's what it will do. And if you go and have a quick look at the code for that — yeah, like I said, it's really simple. There's no thread safety here whatsoever. But we notice that the bank does actually expose an IEnumerable of accounts. So the idea is this. There is an object that has some quite complex state. Yes, it's got lots of things that make up its composite state. And then periodically we want to verify, or we want to walk, that state. Okay? We want to see what's happening inside the bank. We want to verify that the bank is still good. So what we're going to do in here: we calculate how much money we have in the bank when we start. And like I said, if we're moving money from one account to another, the same amount, we should always have the same amount of money. So we're going to kick off a very simple audit thread. And inside, whoops, inside here. And inside here we'll do something like — let's go and grab that — we'll recalculate how much money is inside the bank. Okay? And then effectively we'll write the answer to the console, right? So if total money in bank is equal to expected money in bank, we'll put a dot. Users like dots, right? Dots on the screen make you feel comfortable. Yeah? I mean, either something's happening or it's formatting the hard drive. But either way, you're very happy, aren't you, really? Okay. Okay, so we're going to sit around here. Every time we've got the right amount of money, we're going to put a dot. Every time we haven't, we put an exclamation mark. That's the plan. Okay? We're going to run the code. Wow. There's not many times when we're happy, right? Most of the time, exclamation mark, okay? Because there's no thread safety here, right? You know, one of the threads is moving money from that account down to here, or from there up to here. And we're trying to do an audit at the same time. And as it's moving money around, we're trying to scan through that set. I mean, it's almost like we want to take a snapshot. You want to say: give me the state of the bank at a moment in time, which is consistent, and I'm going to verify it's good. Yeah? That's really what we're trying to achieve here.
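A rough reconstruction of the demo's shape — Bank, Account and TransferRandomAmount are invented names, and the missing thread safety is the whole point:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

class Account { public decimal Balance; }

class Bank
{
    private readonly List<Account> accounts;
    private readonly Random rng = new Random();

    public Bank(int count, decimal initialBalance) =>
        accounts = Enumerable.Range(0, count)
                             .Select(_ => new Account { Balance = initialBalance })
                             .ToList();

    public IEnumerable<Account> Accounts => accounts;

    public void TransferRandomAmount() // deliberately not thread safe
    {
        decimal amount = rng.Next(1, 100);
        accounts[rng.Next(accounts.Count)].Balance -= amount;
        accounts[rng.Next(accounts.Count)].Balance += amount;
    }
}

class Program
{
    static void Main()
    {
        var bank = new Bank(count: 10, initialBalance: 1000m);
        decimal expected = bank.Accounts.Sum(a => a.Balance);

        Task.Run(() => { while (true) bank.TransferRandomAmount(); }); // the writer

        while (true) // the audit loop: '.' means consistent, '!' means a torn read
        {
            decimal total = bank.Accounts.Sum(a => a.Balance);
            Console.Write(total == expected ? "." : "!");
        }
    }
}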
So if we go and try and solve this — I mean, can we solve this with a lock statement? I mean, you know, can we put a lock around here? Is that going to work? That's not going to work, is it, really? Yeah, I mean, because what's going to happen? We're just locking something that's an IEnumerable, right? I mean, that's not going to achieve anything, right? We can run it. We might think we've done something, but no, it's no better. Okay? So the question is, how do we solve this? Well, there's a couple of things we could do. You know, when we lock the bank, when we do the read operation, we could copy the bank. What do you reckon? Clone the entire bank? Would that be a plan? Yeah? We want to do an audit. Stop. Let's clone the bank. Let's clone everything. Yeah? And obviously, the problem is, while we're cloning it — which takes a long time — what can't we do? We can't move money around, okay? So we can't do transfers while we're in the process of cloning, okay? So that's an issue. We don't want to do this. So let's get rid of this lock stuff, right? That's clearly a problem, right? But what we could do is perhaps, well, rather than clone the entire bank — rather than just manipulate the current state of the bank — we could transition the state a little bit, by only copying in a deep way the things that have actually changed. Okay, that's the plan. But before I do that, I'm just going to go and get a benchmark, so we've got some idea about how quick all this stuff is running. So at the moment, we're doing an infinite number of transfers. I'm just going to change this now to do 100. Only there's a few more zeros I could put in between. Okay, let's do 100,000. Let's do that. Okay. Okay. So doing that number of transfers, it actually took 0.11 — well, just over a tenth of a second, yeah? Okay, let's put that away somewhere. Okay. And if I go back to here. Oh, every time. Okay, cool. All right. So how do we fix it? Well, like I said, one thing we could do is really simply do this: var newAccounts equals accounts.ToList. Okay, now that's just taking a shallow copy, right? So that's still a clone, but only a shallow copy — only the references are actually copied, right? And then what we could say here is — let's do, I can do this — newAccounts, square bracket, sourceAccountId, equals a new account. Yeah. Do you like this? This is a bank account which has a balance property. You just change it wherever you like. You'd like that bank account too, wouldn't you? Right? My balance shouldn't be that; it should be this. Okay. And then we can say newAccounts, square bracket, sourceAccountId, dot balance, take away the amount. And we're going to grab that again. And that's going to be destAccountId. And that's going to be the dest account. Okay. And that's going to be plus. And I screwed something up somewhere. Oh, yeah. No. Oh, yeah. Semicolons in the property initializers every time, right? Just feels natural. Okay. And now, once I've done that, I don't want to do that anymore. After I've done that, I just simply want to say accounts equals newAccounts. Okay? So every time we do a transfer, we'll do a shallow copy. Yeah. We'll replace the two account objects that have changed. Yeah? So anybody using the old copy of the list is okay, because that still points to the same objects. Is that a question or a stretch? Oh, thank you. You saved my life. Thank you very much. Thank you.
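Pulled together, the hand-rolled copy-on-write transfer he just typed looks roughly like this — a sketch, with an Account having Id and Balance properties assumed:

void TransferFunds(int sourceId, int destId, decimal amount)
{
    var newAccounts = accounts.ToList(); // shallow copy: only the references are copied

    // Replace just the two accounts that change; any reader still holding
    // the old list keeps seeing a consistent, older snapshot.
    newAccounts[sourceId] = new Account
    {
        Id = sourceId,
        Balance = newAccounts[sourceId].Balance - amount
    };
    newAccounts[destId] = new Account
    {
        Id = destId,
        Balance = newAccounts[destId].Balance + amount
    };

    accounts = newAccounts; // publish the new state; fine with a single writer, no lock
}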
You know, the rest of them just wanted me to crash and burn, right? You, though — you saved me. The demo gods have been too good. Okay. But now it works, right? Yeah. So now I've got lots of dots, right? Which is good. Any downside? Well, remember how long it took to do those transfers before? Oh, shoot, man. That's three orders of magnitude difference in terms of what we've got. So just in case you would forget that — not that you would, right? But you know, in case you did. Okay, let's go and Ctrl+C that. Okay, let's pop that back. Super safe. Okay. Okay, so we've got some immutability now. Okay, we basically said, well, when the reading thread uses the accounts, it's always seeing an immutable collection of those things. If we want to make a change to that list, we do a shallow copy, we change the things that have changed, and then we publish a new list. Okay? At the moment, we don't need any locks because we've only got one writer. If we had many writers, we'd obviously have to wrap that in a lock statement. But since we've only got one writer, we're good at the moment. Okay. So the problem is, the thing that's taking all the time really is the memory, right? We're doing a stupid amount of memory copies in order to achieve this goal of immutability. And so Microsoft come along with their immutable collections and say: we can do better than this. We can simplify this for you, and we can reduce the number of copies. Okay, so they produced immutable collections. Okay, so I think I have — hopefully I have another project. No, I don't. Let's go back in here. So let's do a bit of an experiment. Let's just call this old main or something for now. And let's do svm — the static void Main snippet. Okay. Inside here, let's create a regular list, right? Okay. values.Add. Okay. Right. You guys are random number generators. You're going to give me some random numbers in a second, right? That's the plan. Yeah. Okay. Random numbers. Oh, there you go. I could have predicted that. I must be psychic. Any others? 32. I can see where this is going. So I do the next one. 22. 12. Any others? Sorry? 37. Okay. No one's ever done that yet, have they? See, that's not the number of the devil, right? You know that, don't you? The real number of the devil — there's a bus route in Russia that was route 666, and the locals hated it, and they petitioned the local authorities. They said, we must change this. And they did. And they changed it to that. And then someone realized — and this shocked me when I found out — they translated the Bible wrong. Okay. So one of the things they got wrong was the number of the beast: it's actually 616. So there's people in Russia that are just screwed now, right? Okay. So we're going to run that code and, no surprise, it all works. Okay. So Microsoft come along — and I've already downloaded the NuGet; I'm not going to do NuGet live on stage — and create this thing called immutable collections. So let's change to immutable collections. How hard can this be? var values. Okay. And then here I'm going to say — well, the first thing is, you don't new up an ImmutableList of int. And there's a reason for that. Because an immutable list means what? It can't change. If you could new up an immutable list, its initial state is empty. If some other dude does the same, they have another object which is exactly the same as yours. It's kind of nice — you could just share it, because it can't change. So you might as well share the same object. So rather than have a constructor here — you can't do that.
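As a sketch of where this is heading — the shared empty instance plus chained Adds (shown next) end up looking like this:

using System;
using System.Collections.Immutable;

class Demo
{
    static void Main()
    {
        var values = ImmutableList<int>.Empty; // no constructor: everyone shares the one empty list

        // Add never mutates; it hands back a new list, so you must capture the result.
        values = values.Add(42);
        values = values.Add(32);
        values = values.Add(22);
        values = values.Add(12);

        foreach (var v in values)
            Console.WriteLine(v);
    }
}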
What we do do, though, is ImmutableList.Empty. Okay. Okay. So now we have an immutable list that's empty. Okay. Now the question is, will this compile? Sure, it compiles. What's happening when you're running this thing? Not the same. You see, what does Add return on a regular list? Add returns void. What do you think the immutable list's Add returns? A new list. Yes. So the way this has to work is this. I have to say values equals. Come on, mouse. Shoot, every time. Okay. Okay. So now what we've basically ended up with is values equals values.Add. That returns us a new list which just contains the 42. And then we call Add on that. And that returns us a new list that now contains 42 and 32. And then when we call Add again, it gives us another list which is 42, 32, 12, et cetera. Thankfully you picked good random numbers, so I don't have to look at the slides. Okay. So that's basically what we did. But now when we run it, life is good. Okay. Yeah. So we ended up with this immutable collection. Microsoft is still doing copies. And if it did copies in exactly the same way we did, this would be a waste of your time and you should have gone downstairs and grabbed a coffee earlier, right? Okay. So we'll see in a minute — well, hopefully we'll see some performance numbers that may suggest that they're not doing it quite as naively as we are. Okay. So that's immutable list. Let's put that away. Let's change that into immutable main. Okay. Okay. Let's bring this one back. So what we're going to do now is we're going to refactor again, but we're going to create a new type of bank here, because, look, we code to an interface. So we're going to create a new type of bank. We're going to call it ImmutableBank, and we're going to store the accounts not in a regular list, but in an immutable list. That's the plan. Okay. So let's go to the bank class. We're just going to see where we're going here. Okay. So let's go and create a new class. Public class ImmutableBank, which is a kind of IBank. Okay. So like I said, instead of using a regular list like we've done down here, we want to hold the accounts in an immutable list. And that poses a question. You see, Microsoft have written the immutable list type, and the immutable list guarantees that you can't add to and remove from the same list other than by producing a new list. The thing is, they have no control over the types that you put in the list. If you put mutable types in an immutable list, that doesn't feel good, does it? Doesn't smell right, does it? At the end of the day, a list should have a consistent state, which is the items in the list — in terms of, there are 10 items — but also the data inside each individual item. So really, what we need to build before we even start playing with this is a new type of bank account, which is going to be called ImmutableAccount, which is a type of IAccount. Okay. And when you create one of these, we're going to give it an account ID. And we're going to give it a decimal balance. Oops. I'm just going to fix that. Okay. So we have an ID and balance. These are getters. That's fine. Okay. We'll tidy these up. Why did I put a capital B there? I don't know. And initialize that property. And initialize that. Okay. Right. So now we've got an immutable account. Obviously, you can't do anything with this yet. But what would you normally do with a bank account? You do credit and debit operations.
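The immutable account type being built here, reconstructed as a sketch (the exact shape of IAccount isn't shown in the talk):

public class ImmutableAccount : IAccount
{
    public ImmutableAccount(int id, decimal balance)
    {
        Id = id;
        Balance = balance;
    }

    public int Id { get; }           // get-only: fixed at construction
    public decimal Balance { get; }  // no setter, so the account can never change
}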
Well, we're not going to simply say public void Credit here, because that would imply that we're going to change the balance. Well, we can't change the balance. What we are going to do is return a new immutable account in the new state. So the method looks more like WithCredit. And what we're going to return from here is a new ImmutableAccount. Its ID is going to be whatever the ID is, and effectively its balance is going to be balance plus amount. Okay. And then similarly, we'll do a debit one: public ImmutableAccount WithDebit, decimal amount. And that just returns WithCredit. Okay. Well, good. So now the account object has operations to change state, but it doesn't change its own state. It produces you a new object, which is that next step of transition. So in other words, the account is always immutable. Okay. So what that means now is we can safely do this. Inside here now, we can have a private ImmutableList of ImmutableAccount, and say accounts equals ImmutableList.Empty. Okay. Well, good. And then we're going to implement the rest of the bank. Okay. Cool. Inside here, we obviously need to add the account, which is accounts equals accounts.Add new ImmutableAccount — I'll just use the next number in the sequence for the account ID, and the initial balance. Okay. So that's adding done. The get-accounts operation here is really simple: just return the accounts. And the transfer funds — well, that's where it gets a little bit more tricky. But again, no locks are going to be involved here. We're simply going to transition from one state of the bank to the other. We're just going to say accounts equals accounts.SetItem. The item you want to set is the source account ID. And then we'll say accounts, square bracket, source account ID — oops — WithDebit, and the amount. Okay. Let's just bring that up here. Okay. Make sure I don't mess this up again. Destination account: WithCredit. Okay. I think we're done. We've done everything right. Oh, no. Change that. Okay. And we should be done. No locks involved. Nothing else. Hopefully now, when we run it, it's going to be way more performant than what we just had a minute ago. That's the plan. Okay. So, simple code. Yeah. And hopefully more performant. Program.cs. So we were using the regular bank here; we're going to be using the immutable bank. Should still get dots, but hopefully quicker. We have to beat 13 seconds. And there we did. We got it into less than two — well, less than three seconds. So, massive speed improvement and simple code. So the question is, how are they doing this? Magic, right? So a regular list in .NET is implemented on an array. That's super optimized for indexing. Super good. Give me the seventh element — brilliant. Linked lists, on the other hand, are not so good for rapid access, jumping into the list. But in terms of being able to build immutable collections and minimizing the amount of copying, this thing rocks, right? Because there's the immutable list at the top, three nodes, and I want to add another node: we just create another list that basically has a reference to the previous one. It's immutable. It can't change. And we just tack the other one on to the current one. Okay. So this is optimized for write operations, effectively. Not for read operations. Reads suck here. Okay. That's why we only got all those dots slowly before we actually got the value out. You know, the first time we ran it, we had about eight, ten dots — yeah, a mixture of exclamation marks and dots. This time we had loads. Reading sucks.
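For reference, the state-transition methods and the lock-free transfer just built, reconstructed as a sketch:

// On ImmutableAccount: transitions return new objects instead of mutating.
public ImmutableAccount WithCredit(decimal amount) =>
    new ImmutableAccount(Id, Balance + amount);

public ImmutableAccount WithDebit(decimal amount) =>
    WithCredit(-amount); // a debit is just a negative credit

// On ImmutableBank: a transfer is one state transition -- no locks needed
// with a single writer.
private ImmutableList<ImmutableAccount> accounts =
    ImmutableList<ImmutableAccount>.Empty;

public void TransferFunds(int sourceId, int destId, decimal amount)
{
    accounts = accounts
        .SetItem(sourceId, accounts[sourceId].WithDebit(amount))
        .SetItem(destId,   accounts[destId].WithCredit(amount));
}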
Writing works really well. So the use case for this is this: I've got a complicated data structure that I need to periodically read. And when I read it, it's complicated — I need to look at lots of values, and I need to see it as a consistent picture. But at the same time, I've got other people wanting to write to that data structure, and they must be able to write quickly. Visual Studio needs this: as you busily type your code, abstract syntax tree analysis is trying to work out where things are, where variables are, and all that kind of jazz. This is exactly what they used it for. So it's not for everything. Reads suck; writes are really good. Similarly, deleting from this stuff — again, super fast, super good. Okay. So where have we got to? Thread safety doesn't have to be complicated. Yeah, we can write really simple thread safe code using these new data structures, which means that code is nice and easily readable. Let Microsoft worry about the complexities of concurrency. They seem to do a pretty good job compared to the hand-rolled stuff that we've done. We've got simplicity and we've got better performance. So, clearly a big win here. Okay. Okay. Final thing we need to say. Shameless self-promotion. There's a book. I knew I shouldn't have done that slide animation. There's a book. The book has lots of the examples we've just done. I have a copy here. Historically what happens is, I stand on the spot, I spin round with my eyes closed, you shout go, and the book goes at that angle of rotation and ends up somewhere in the crowd. I'm not very comfortable on this stage doing that. Okay. So I don't know how to do this, actually. I'm just going to close my eyes. I'm going to throw the book. If you've got glasses on and you don't want them broken, then you need to watch out now. You ready for this? Okay. Okay. I'm trying. I'll just, I'll spin a bit. Okay. That would mess me up a bit. Oh crap. Oh, there you go. Anyway, thank you very much.
When developing multithreaded applications, we need to consider thread safety when sharing state across multiple threads. These techniques require the developer to understand possible race conditions and select the cheapest synchronisation technique to satisfy thread safety. However, while this process is essential, it can often become tedious and can make even simple algorithms seem overly complex and hard to maintain. In this module, we will explore the use of framework-provided concurrent data structures shipped with the Task Parallel Library that will simplify multi-threaded code while maximising concurrency and efficiency.
10.5446/51846 (DOI)
Hey, everyone. Can everyone hear me all right? Great. Thanks. Some questions are like Facebook relationship status, right? It's complicated. And it gets very interesting when you start adding your parents there. But in almost every company that I've worked at, every team that I've been part of, either I've been the only woman in the team or there's been, like, one more person. And I'm talking over the course of, like, 15 to 20 years of working in the tech industry. So at some point I started thinking, like, why is this? And this question started to bug me a lot. And I really wanted to find the answer. So before we get on, something about me to give you a little bit of my nerd credentials. I've been in the tech industry for 15 to 20 years. I started writing systems using FoxPro's screen builder. Yep, really old. Moved on to C++ and C#. And I've written systems that biologists use to analyze DNA still today. And, you know, they use their qPCR equipment and get their data analysis, and they use it today. And I've been part of teams that developed 911 emergency response software. And it's still active in major U.S. cities today, helping people. So somewhere along the line I totally fell in love with event-driven architecture and designing reliable systems. So I started looking more into messaging architecture. And then I joined Particular Software, and I'm currently a developer at Particular Software, the makers of NServiceBus, who create APIs for developers that help, you know, create very, very impactful software. So if you want to talk tech to me, please come over to the Particular Software booth. We can hang out and talk tech all day. But today I want to share some of my experiences in how we can make the industry more diverse, more inclusive. So why is diversity important? Right? I mean, there are so many studies that have been conducted that show that teams that have more diverse people — people from, you know, different cultures, races, genders — can come and find very creative solutions. And yet, you know, when we look at our own teams, that's not the case. So instead of talking stats and numbers, I want to share a few stories with you. This was in the 1900s. Mary Anderson — she's a woman — and she wanted to enjoy New York. So she went to New York and she's in this little trolley car. And she's looking out the window, trying to look out and absorb all of the magnificent New York, right? So what happened was, it started to rain. And what she noticed was, the driver of the trolley car had stuck his head out of the window and he's, like, frantically trying to use his hand to wipe away the rain, all the while driving. This is the time and age when windshield wipers weren't invented yet. Now, Mary saw this and she saw a problem there. And so she went back home — and she was an inventor. So she invented this very simple mechanical device which had a spring-loaded arm where you could attach this rubber blade, and it went and attached on the outside of the car. And the driver, from the inside of the car, could simply use a simple turn device to operate the wiper. And so she patented this in 1903 or 1902. And the groupthink, or the common way of thinking at that time, was like: woman, what are you thinking? This is very distracting to drivers, having this thing, you know, on the outside. Like what, you know, what is possibly — I mean, what is so cool about this? We don't want this. Get away. Right.
And there was another woman, Charlotte Bridgewood, who looked at Mary's patent and thought, this is a brilliant idea. But she thought that it could be improved upon. Like, you know, the driver still has to turn this knob manually. So she wanted to automate this, and she automated that and, you know, made the patent better. Unfortunately, it wasn't until, like, 20 years later that the industry, you know, adopted this as a safety mechanism and then said, like, yes, all automobiles should have this. This is important. But, you know, without Mary thinking out of the box, the common thinking at the time was like: this is a perfectly acceptable situation, for people to, like, you know, put half their bodies out of their car to wipe their windshields, right? So this sort of thinking outside the box is important. Now let's talk about Dr. Alice Stewart. She was a fantastic doctor, a pioneer, and there's a book about her. The book is titled The Woman Who Knew Too Much. Now, she was a doctor in the 50s. There weren't a lot of women doctors. Now at the time, she was dealing with a very, very interesting problem. She found out that children were dying at an alarming rate because of leukemia, and her own godchild had died. So she wanted to get to the bottom of this problem. And when she dug in, she made an alarming discovery: for all the children that had died, the parents had access to proper health care. So these were well-off people who had access to health care. And the common practice at the time was, as part of prenatal care, pregnant women were being X-rayed to check the wellness of the baby. So she found this, and she published a study in 1956. And her study was, again, not accepted right away. And she had to prove more and more, with more and more data, until the 1970s — when you then started seeing these big signs in front of, like, X-ray rooms saying, hey, if you're pregnant, you should watch out, and the techs start asking you, you know, are you pregnant? You shouldn't be X-rayed. Again, the common thinking among the doctors at the time was: X-ray was a cool new technology. They used X-rays to figure out problems. And they couldn't believe that this could cause a problem. And again, it took a person on the outside looking in to identify a problem, and it was a very big problem. Now, let's talk about our wonderful Dr. Grace Hopper. She's a woman in tech, right? She invented the — or she coined the word — debugging. We're all devs, so we understand what that is. And the common thinking at the time was, computers were fantastic machines to perform calculations. And Dr. Grace Hopper challenged that idea. She felt like computers can do so much more, right? So she thought, if we could write instructions in a human-understandable language, then we as human beings can, you know, write cool new applications, and these sets of instructions can then be converted into machine-understandable language — and then you have compilers. So she paved the way towards, you know, creating a whole new world, a whole new line of apps. And she said that the most dangerous phrase in the language is: we've always done it this way. And that's the problem, right? So we are set in our ways. And when we're a group of the same type of people, we tend to look at things the same way. We don't look at problems differently. Now, we might think that, like, hey, this is all in the past. Well, this is the 2000s. How many of you have watched the movie Concussion? Anybody? Okay.
So there's a movie called Concussion, and it's based on Dr. Omalu's story. Now, Dr. Omalu is originally from Nigeria, and he immigrated to the United States. He's a forensic pathologist. Now, what he did in 2002 was, he conducted an autopsy of a very famous football player — American football player — called Mike Webster, who used to be very, very famous playing for the Steelers. Now, Mike Webster was only 50 years old. And before he died, I mean, his life was in shambles. I mean, he went from being famous and, you know, somebody so awesome — but towards the years that he died, he was literally on the streets, and he was literally going crazy. But when he died and Dr. Omalu started digging into the reasons, he actually found a new disease. He named it CTE, chronic traumatic encephalopathy. Chronic meaning repeated, traumatic meaning injuries — it causes brain damage. I mean, if you look at it, it's common sense. You bash your head repeatedly, you're going to end up with brain damage. But when he published his study in the medical journal — I mean, the NFL is a huge, huge organization. There's a lot of money involved, billions of dollars. Like, an ad spot for the Super Bowl runs like several million dollars, right? It's a huge industry. And it took the NFL until, like, 2009 to actually put up some guidelines for concussion for their players. I mean, you've got more than 80 ex-NFL players that have been diagnosed with CTE, and more than 5,000 ex-NFL players have sued the league. And yet today — today — Roger Goodell, the commissioner of the NFL, still claims that football is safe. So the point here is, Dr. Omalu didn't know anything about football until he actually performed the autopsy of Mike Webster. And then he dug deep, and he was able to find a problem. And again, that whole looking, you know, at a different angle helps find new solutions. Now, going back to tech, right? 5.8%. Stack Overflow in 2015 conducted a survey. And in that survey, they surveyed about 26,000 developers from 157-odd countries. Out of those 26,000 people, only 5.8% of them were women. That's pretty sad. This wasn't always the case. If you look at the 1980s, almost 40% were women in tech. But now, if you look at every major company, you're talking like less than 20%. And why is that? In the 1980s, what happened was, the personal computers started being marketed. And the personal computers were targeted more at boys. I mean, you had ads like this, right? So you have here: here's a personal computer. I mean, it even says, you know, because the sooner your child starts, the further he'll go. Like, how much more can you target? And here's another one. Adam helps prepare kids for college and helps pay for it too. And again, here's a picture of a boy. Now, what happened was, because the personal computers were marketed as toys and they were more for boys, you know, boys were playing with them — like, kids were getting more involved with computers at an early age. I mean, boys. And when college started, even in introductory level courses, when women came, a lot of the people that were in the class — who were men — knew a lot of the stuff. So when you're in a class and you kind of feel like you're the only one that doesn't know the stuff, and you look around you and everybody else seems to know what they're doing, you kind of feel left out. And you kind of start to question yourself: maybe this isn't for me, right?
So that sort of feeling sets in, and a lot of women did leave the computer science class. Now, I want to share my own personal story. This was also in the 80s, and my school had started a computer center. This is a time when — I'm originally from South India and now live in the U.S. — South India temperatures can be, like, 30 degrees Celsius with 80% humidity. We didn't have air conditioners in classes. We just had ceiling fans. And when the computer center started, I mean, it was like a clean room to go in there. You had to wear, like, the little booties, to cover your feet even if you're wearing your socks. It was like a clean room. So it was so fascinating. So I walked in there, and a teacher who teaches computer science kind of, like, pulled me in. And she talked to me about different summer classes. And so she wanted me to take up the summer classes for BASIC programming. I was like, oh my God, this is a lot of money. It was 300 rupees at the time, which is, if you convert roughly now, like five or six U.S. dollars. But back in the day, it was a lot of money. And being from a middle class family, I wasn't sure if my family could afford it or not. But my parents were fully supportive of me doing the class, and they signed me up. They signed me up, I took the class, and I was the smallest person in the class. Everybody else in the class were grown-ups, or kids — college kids and stuff. I didn't know a lot of the stuff that was being taught. Everything was brand new. I was the odd person in the room. I was miserable, because I was a good student that gets, like, good grades. But when I went to that class, it was horrible, because, like, I know how to solve problems, but I had to write it down. I had to draw flowcharts, and it was a part of the training that was challenging your brain to think a certain way, think a different way. Right? I hadn't exercised my brain muscles that way. So I found it very difficult, and I didn't do well in that class, and I just felt the same way. It was like, this isn't for me. But then my teacher again pulled me aside and said, like, no, no, no. Try a different class. And she signed me up for more classes. So it worked out well for me. And when I took computer classes in high school, it was just so much easier, because it was the same stuff that was being taught, and I already knew how to think a certain way. Then it just started to fall into place. So for me, that was the big, big change, and I had the right people at the time in my life to kind of push me in and give me that encouragement. But sadly, like, you know, the culture and the society isn't that way. We tend to associate women more with caregiving and family, and men more with career, as, like, you know, providers. So the thing is, our brain is awesome. It's like a supercomputer. It's making, you know, a million calculations a second. So many computations. And it can do that because of the amount of information that has been provided. Now, in computer science, there's like two big problems, if you're a developer: naming things and cache invalidation. So bias is like a cache. Your human brain takes certain shortcuts because of all the prior knowledge and information. Now, if you've fed it, you know, hundreds and hundreds of years of stereotypes, that's built in. Sometimes we don't even notice some of the decisions that we make.
We kind of tend to make them unconsciously. And that's the problem. Now, talking about stereotypes: I Googled the word CEO. Google gave me a wonderful definition of who a CEO is. It's like the top person in the organization. And next to that is a picture of this white dude in his early 30s or late 30s — can't tell. Then I hit the images tab. Right. I mean, for heaven's sakes, even the icon on the clip art is of a dude with a tie. And these are the types of stereotypes that we, you know, push into our heads. And we see images that reinforce the stereotypes. And this is all we, you know, were fed. I mean, I'm sure if you Google programmer, this won't be far off. You'll see a lot more dudes than, you know, other minorities or women. Now, I'm the last minute parent. Summer has started and school's ended. So I'm frantically trying to figure out, like, what are the summer camps? As a last minute parent, I did this: I Googled for available summer camps literally, like, a day before I flew in to NDC. So when I Googled — this is from the city of Temecula, Oregon. I'm sure you can't read it, but these are all the summer camps that were listed. Now, among the summer camps listed, I saw one geared to robots, which had some really cool stuff, like, you know, Lego bots and building, you know, all kinds of robots and things like that. It seemed really cool. And then I saw two more classes, and they seemed to kind of target girls. There was a girls retreat camp. And an If I Were a Princess camp. And when I read the description — let me read it to you: So join us for this fun filled summer camp. Spend your mornings creating arts and crafts, decorating cookies, designing jewelry, painting and more. So this is what I guess we want our five to ten year old girls to know in summer. Whereas there are kids that are building bots and stuff, and this is being fed to our girls. And the If I Were a Princess camp — there's stuff like, you know, good manners, learn to play games, and we'll work on our table manners. So again, it's stuff that's being targeted, again, at the stereotype of women being caregivers and men being providers. And this kind of stuff starts very, very early. Our girls are more than just a princess. And if you look at this kind of stereotyping and bias, it's very, very prevalent in the toy industry. Now, in the toy industry, I mean, you've got girl toys, you've got boy toys. I mean, if you walk in, there's a pink aisle full of, like, fluffy stuff and dolls and frills in major toy stores. And then if you go to the boys aisle, again, you've got stuff like bots and building things, and girls at a very, very early age start to associate this. My own little daughter, she's nine now. And when I took her to a toy store, trying to, like — it's like, hey, how about this? How about this? She's like, no, mom. That's a boy toy. I don't want that. Don't you get it? I don't want that. I'm like, wow, she's nine and she's already got this thing in her head, that these are toys that she doesn't want to be associated with. Now, this is where Debbie Sterling — she's a woman in tech — wanted to challenge this bias in the toy industry, and she started this company called GoldieBlox. So what she found out was, like, you know, how girls process information is different from how boys process information. Girls love stories.
And so she built, like, toolkits for girls where they can build stuff, you know, to challenge their spatial skills and stuff. And she did this, and she has this really cool, like, you know, storyline and character line. There's Goldie Blox, and Ruby Rails — who's a programmer, of course. And so she's got this whole line. And when my daughter picked up the GoldieBlox set, it was kind of cool. There's, like, Goldie Blox's invention diary, and it was like so, so cool. So she was reading, like, Goldie Blox's secret diary, and she was all into it and trying to build stuff. So with Debbie's permission, I want to play a short video for you. (Video plays, set to Holding Out for a Hero:) Isn't there a white knight upon a fiery steed? Late at night I toss and I turn and I dream of what I need. I need a hero. I'm holding out for a hero till the end of the night. He's gotta be strong and he's gotta be fast and he's gotta be fresh from the fight. I need a hero. I'm holding out for a hero till the end of the night. The End. So, stereotypes play a very, very big role in how children see things, and this kind of, like, starts at middle school age. So on one end we have a pipeline problem, right? We feed these stereotypes, and in the 80s you had Revenge of the Nerds, WarGames and all those kinds of movies. They kind of, like, you know, gave nerds a bad name, and girls didn't want to be associated with being called a nerd, and they kept away from computers, because that was, like, a nerdy thing to do. So on one end you have the pipeline issue, where all these stereotypes are pushing our girls away from tech, and then on the other hand you've got the inclusiveness problem, right? We already have women in tech, and they're leaving tech, and this is because of various problems. Let's talk about, like, how bias can affect the hiring process. Now, Dr.
Corinne Moss-Racusin — she's a social psychologist — conducted a very simple study. She gave out a fictional resume. I mean, it was an identical resume with just one change: the name. John versus Jennifer. She passed that resume around, and this was for a position of lab manager. She passed it around to a bunch of scientists, both men and women, to kind of see the feedback that Jennifer got versus John. People found John to be more assertive, or, like, they felt him to be more confident and competent. But Jennifer sadly didn't fare that well. Even women felt that Jennifer didn't have that much experience. It's the same resume. And some of the scientists even went on to say, like, you know: I'm not willing to even mentor Jennifer, because I think this is going to be a waste of my time. I don't want to deal with this. Sadly, this is the same resume with just the name changed. And these are scientists — very, very objective people that look at data to make rational, objective decisions. And yet this happened, and this is because of our existing cognitive biases. Right, so what's in a name? Unfortunately, everything. Now, Norway — I have to, like, thank you guys: you have such a wonderful policy when it comes to parental leave. I mean, it's amazing. But sadly, in, like, the rest of the world, it's not the case. Now, when a woman gets pregnant, it's kind of considered as, like, a resource issue. Again, when a man's about to have a child, he's kind of, like, seen more as: oh my god, he's taking on more responsibility. So he gets promoted much easier. On the other hand, it's not the same story for women. A lot of women don't even go up to their managers to even say they're pregnant, because they know that if it's, like, salary appraisal time, they could get dinged, because they're seen as resources that can't effectively work during that time. And this sort of thing happened to me when I was pregnant with my daughter. I had been asking my manager for C# work for a long time. And I had, like, several years of C# experience, and I was like, hey, please, let me get this. Then there was a big project that came into the pipeline, and there was a big feature, and so I was assigned to it. Then I was pregnant with my daughter. I was just two and a half months pregnant, but I told my manager — I was like, hey, you know, I just want you to be aware, so, you know, a few months from now you can prepare. Immediately I was taken off that assignment, and that task was assigned to somebody else. I mean, come on, people — I'm going to have a baby several months later, not that exact same instant. And I can write code even when I'm pregnant. It's not like I'm losing stuff, right? So this is, unfortunately, sadly, the case in many, many companies. And then Fortune magazine published this study, called the Hastings study, on women in STEM. Women of color in STEM: of all the subjects that they had interviewed, all faced bias. 100% of them. There was not one woman of color that did not face bias. They had faced some type of harassment or other. Now, Asian women are seen as very demure, calm, or, like, you know, less outspoken. But Latinas and black women get labeled as the angry black woman. It's very different. Now, when we're having conversations — it could be in Slack or HipChat — we're having conversations, and you could just be commenting on stuff, and some of those comments could get perceived as, like, angry. Like, how did you come to that conclusion from a GitHub comment or a Slack comment?
How can you attach this emotion — anger? Where does that come from? Right? But this tends to affect the whole line of conversation, because all of a sudden, you tend to lose all of that information the other person is trying to convey, and now you're sort of attaching this negative emotion to it, and taking away all the good points that this person is trying to highlight. Now, you've got stuff like the #ILookLikeAnEngineer Twitter hashtag, where this woman, Isis — she's a full stack engineer for a company — all she did was appear in her company's recruitment ad. And that ad was up in San Francisco, in several places. And there was a whole bunch of sexist comments, which started the whole #ILookLikeAnEngineer movement, where a lot of women posted pictures of, you know, what they do. And it kind of highlighted that there are so many brilliant women out there doing some amazing things. Right? While it highlighted that, it also highlights all the sexist stuff that goes on today. There's Gamergate, Shirtgate, #DistractinglySexy — I mean, you could just keep on going with the Twitter tags. Then, of course, the interruptions. Now, when Kanye West goes to interrupt Taylor Swift, the whole world reacts like: how dare you, Kanye? What were you thinking, man? But this sort of stuff happens regularly in meetings, on an everyday basis. Now, who's pulling out the Kanye card? Unfortunately, you know, people get talked over in meetings, interrupted. This sort of stuff happens on a day to day basis. So what are some of the things that you can do today to effect a change in your own organizations where you work? First of all, you have to speak up. Because if you see a problem and you don't highlight it within your organization, then how can we effect change? People — people effect change. You can't just stuff in a policy and say, hey, follow this policy. That's not going to work. But when people start having meaningful conversations, that's when change happens. People change people; conversations change people. And then it can cause many policies to be created, which would be much more effective. But we can't have the mindset of: oh, this is not my problem. I mean, when we see something, we have to speak up. And also, challenge our own selves. Now, Harvard University came up with this test. It's called the Implicit Association Test. It tests your own biases, and there are several different tests that you can take: the gender-career IAT, the skin tone test, and so on. Now, it's just a five to ten minute test. Like I said, if you go to the website, these are the several tests you can take. And the skin tone test is also very interesting, because we tend to associate fairness with good and dark skin with bad. I don't know — I mean, you see Gandalf the Grey: he was Gandalf the Grey the wizard, but it wasn't until he became Gandalf the White that he was, like, so awesome, right?
He defeated the Balrog and then became Gandalf the White. Now, again, if you look at elves — like, very fair people — they were standing up for what was right. Then you look on the other side at the orcs: they were mean looking, dark, you know, ogre-looking people. Again, this sort of stereotype gets built in. Unfortunately in the US, if you look, there was a study that was conducted on black boys: teachers tend to punish black boys more than other kids. It could be for the same type of offense — talking in class — but the black boy would get punished more than the white boy. And this is not just white teachers; it's even black teachers that are in the classroom. And this is, again, because of the bias. I mean, yesterday in Troy's keynote there was a gif that I found interesting. The gif was — he was talking about how, you know, you had, like, bendable computers. And so one guy, the white dude, was working with a computer that was perfectly bendable. Then you see the black guy that's trying to do the same thing to a Mac. Again, you look at how, like, the black guy is being stereotyped in that way. I mean, you don't notice it, but once you start thinking about these biases, then you start to pay attention, and then you start to see more problems. Now, the IAT itself is a very simple test. In this test, the gender-career IAT, all you're supposed to do is — you're asked to associate male names and female names with either career or family. So the test itself is super simple. The first two things are just baseline: you just have to press the E and I keys, like, really fast as names pop up in the middle. The first two slides, it kind of just wants to know: hey, do you understand male names and female names? And then same thing with family and career — words that are associated with family and career. So once this baseline is established, now comes the interesting part. As names pop up in the middle, you're supposed to associate male with career and female with family. Now, this is our normal bias, our normal way of thinking, so we go fast: as the names pop up, we press E or I. Now comes the very interesting part, where this is switched up. Now, as names come up, you're asked to associate male specifically with family, and women specifically with career. And the test knows how fast you went in the previous test, and how fast you go here. Then, after a bunch more questions, you get the result. This is my own result. This is me, the person that's talking about gender bias. This is my result, saying, like, I am moderately biased. So this is something to be aware of, and I'm aware of it and I'm working on it. So the thing about unconscious bias is, once you know that you are biased, you can try to fix it, and you can try to learn or do things that go against that bias, to kind of, like, fix up the shortcuts. So there are several different things you can do. One: there are several companies that offer bias workshops. Maybe you can have one of those companies come and do a bias workshop. And again, this is where, like, Dr.
Corinne Moss-Racusin, the social psychologist that conducted the hiring study, comes in. She did another experiment where she went to the set of scientists, but this time she told them about the John versus Jennifer study, and then they also attended a gender workshop, a bias workshop. The results were vastly different. These people became the biggest proponents of diversity in their company, because they understand what's going on. They understand the bias, and so they are looking out for these extra things that they previously missed. And if that's not possible, the Harvard University test is available — you can take it anytime. So perhaps have a team IAT session, and discuss the results, start a conversation. And there's this wonderful book called Predictably Irrational. It talks about, like, you know, how human behavior is, with a whole bunch of data from a whole bunch of experiments. It talks about things like anchoring bias, where, you know, you're given three choices. The first choice has not that many features and is kind of priced at one extreme. The one in the middle is kind of nice; it's priced moderately. And the third option is, like, priced extremely high. So you as a consumer, you're looking at the middle option going, hey, that's the best choice, I'm going to get that. But that's exactly what the marketing guys want you to pick. That's why it's placed in the middle. So you have all these different biases, and this book is wonderful. It talks about all of those things. Now, the thing about biases, also, is that you have to learn to work against them. There was a really cool experiment conducted by Destin — there's, I think, a YouTube video; it's called the Backwards Brain Bicycle. What he did was: there was a bicycle, and they made one change to it. If you turn the handlebars to the right, the bike will go left. And if you turn them to the left, the bike will go right. And that was one simple thing, and he told the people that were trying to ride the bicycle: hey, these are the adjustments. Now, can you ride the bicycle from this end of the stage to that end of the stage? But you can't put your feet down. And people try — it's so hilarious to watch people try. And the thing is, they know this is how the bike works. They know. But still, it's so hard. It's so hard for the brain to make that switch. So dealing with our biases is hard, and it takes time. But the only way to deal with them is to challenge ourselves constantly, and to feed ourselves more information, and to step outside our own norms. Right? So the other thing is, when you look at your screening process: you know, how are you sourcing your candidates? If, when you have a hiring position, you go friends of friends of friends — hey, this is a cool job, I think you should apply — then we tend to bring in people that we know. People that we know are probably people who look like us, who are in our comfort circle. Right? Again, then we end up with more people who think like us, who look like us. And that's not how you build diversity. And again, sometimes you have to go outside of your comfort zone. Try to look at places where you normally wouldn't look to find candidates. And meeting moderators — we talked about interruptions in meetings. Now, anybody can be a meeting moderator. When you see someone speak over Jennifer, Sam can speak up and say, hey, can Jennifer finish? Can she finish her thought?
Because when you're being interrupted in a meeting, you're trying to get your idea across. And when you can't, you're not being heard — that's not proper collaboration. But when someone speaks up and says, like, hey, can you let her finish? — and this could even be a guy on the team: can Sam finish? Right? That kind of puts the onus back on the person, and they can finish their thought, get their idea across. And that's wonderful. This sort of thing works very well where I work, at Particular. And this is where, like, one of our colleagues will say, hey, can, can John finish his sentence, please? And so we all, like, you know, call each other out when this happens. So the thing about bias is that, you know, we're not bad people making bad judgments. Sometimes, because of the stuff that's been fed to us over a period of time, we tend to make these bad decisions. But unfortunately, if you look at bias, bias is nothing but discrimination. And discrimination is just plain wrong. The reason I'm standing here today is because — no matter what my daughter chooses, arts or tech — if she chooses tech, I want her to be in an environment where she is not discriminated against, where she has the fullest extent of opportunities available to her. And I know this is a very, very, very tough topic, but I really appreciate your time. And I would love to hear your stories — send them to me. My email address is myfirstname.lastname at me.com. It's much easier if you have your program cards; you can email me. Thank you so much for your time. I really, really appreciate it. Do you have any questions? Thank you. Thank you. Oh, sorry. Yeah, so the interview process itself is interesting, right? Sometimes if you ask a question like, you know, how would you rate your C# skills on a scale of, like, say, 1 to 10 — I might give you a 7, but that doesn't mean my C# skills are, like, you know, 7-ish. And a guy might give you a 9; that doesn't mean that his C# skills are, like, you know, through the roof, right? So sometimes, like, some of those questions that we ask can make an impact. And also how we process: for example, some interviews ask you to go in and, like, draw stuff on the board, or, like, you know, whiteboard a solution. Some people may not be very comfortable with that, whereas if you give them a little bit of time, they are perfectly capable of giving you that solution. So you've got to think about, like, you know, if you want the best candidate, what are all the different avenues by which you can get that information out of the candidate. So there's a lot of good articles. One of the things that I read was, like, Google's interview process, and some of the questions that they ask in terms of, like, behavior-type questions. But the interesting thing is, they ask the same type of questions to everyone, so that when you get the results, you can look at those results in a meaningful way and make an objective decision. Any other questions? Thank you so much for your time. Thank you.
Research indicates that a more diverse group of people can solve problems better, come up with more creative solutions than a homogenous group. However, most industries are predominantly white male. Why? The hardest possible thing is to know what you don’t know. Understand your bias. Being aware of your hidden biases can help you to look at things differently, to unlock completely different perspectives. Understand the challenges that women face in today’s industries. See how you can help to be more inclusive and create a more open and diverse culture where you work.
10.5446/51853 (DOI)
Arigato gozaimashita, toda — thank you all very much for coming here after the party; everybody is a bit tired, I guess. Today we are going to talk about unit testing and concurrent code, right? That's what you came for — I hope so. But before that, we are going to speak about my favorite topic: myself. I'm a consultant; I used to work for a company specializing in unit testing tools, called Typemock. You might have heard about it. I'm not working there anymore, so I feel free to advertise the product from time to time — I am not affiliated with the company in any way whatsoever. Other than that, I am also the evangelist for OzCode. We have a booth over there; you might have seen it. We have a raffle at half past eleven. If you haven't signed up yet, you should. And I have a blog, blog.drorhelper.com, where I write about anything I find interesting — usually coding-related, but not only. All of us live in a very interesting time. You know, look around you. In the past — and I know, I've been in the past — we knew that when we wrote bad code that didn't perform as well as we wanted, all we had to do was wait four, five months, and a new Intel processor would come around and make our code run faster. Also known as the free lunch. And it's over. This is no longer happening. We can't rely on a new, faster processor. Processors are not getting faster anymore. The only thing we can rely on is more and more cores inside our machine. Even my father, who just browses the Internet and writes the occasional email, is using eight cores to do it, simultaneously. And because of that, new language constructs start to appear — you know, async-await, tasks. We're talking about all these cool things, of starting new threads in multiple new ways. Even C++ finally got a threading model a few years ago, because we need those in order to write good code. And new practices appear, like actor models. I've been to two sessions about actor models at this conference alone, and there was another session about concurrency yesterday, and people are talking about those things because there's no way around it — you have to use those, unless you don't care whether your application runs well. And new languages — what I call new-ish languages, because they were there all along — like Clojure and Haskell and Erlang, and functional programming in general, appear just to accommodate us when we want to write good code that is still multi-threaded or concurrent or asynchronous. And this is a very interesting time. Now, the problem is that the unit testing world — and I say this as a unit testing consultant — hasn't moved that way. Because when you go to a unit testing course, or you read a unit testing book, or you go to a unit testing blog, you see something like this, right? Usually the calculator. That's our hello world in the unit testing world, because it's easy. Two inputs, one output, no threads, no tasks, no async, no concurrency whatsoever. We're in the middle of a revolution, things are changing around us, and a consultant teaches you about this. And then you go on Monday to your work, where you're stressed out, don't have enough time to do anything, and this does not work. It doesn't work because your code doesn't look like this, right? The problem with concurrent code is, it's difficult. It's exactly the code we want to test. Things are going around all over the place. We don't think in a concurrent manner. A human being cannot multitask.
People who say they can multitask don't hold the phone in one hand and write an email with the other. That doesn't work. What they do is divide their time in a very efficient way in order to semi-multitask, just like my single processor back in the old days: I do a bit of this and a bit of that, and mess up both. In the past, I could put a breakpoint at the beginning of a method and run, using F10, from start to finish. Today it won't work, right? I'll get to an async call or a new thread or whatever, and the cursor will go amok to the other side of the screen, to another method, and I will be scratching my head thinking, how the hell did I get here? I open the stack trace, maybe, and see a completely different stack trace. It's not that easy. And threading doesn't look deterministic. In fact, it is deterministic, because everything in computers is deterministic, essentially. The only problem is that there's time involved, and when time is involved, things look as if they are semi-random. And so this is difficult code to write correctly, and at the same time it looks as if it's impossible to test. The third problem is that when we look at what a good unit test is, it's almost the opposite of concurrent code. Almost the opposite, because we want consistency: a test I can trust, so that when it fails, it means I have a bug. That's what I want. That's why I write them. The only reason I write a unit test is for it to fail someday; otherwise, I wouldn't bother, because once a unit test fails, I know I have a bug. Now, with concurrent code, that's not always the case, because concurrent code sometimes doesn't do exactly what I want in the same order, the same way, or the same amount of time as the previous time I ran it. And the test should be maintainable. It should be easy to refactor and change unit tests as your code advances, as the product manager brings new requirements, completely different from the requirements he brought you yesterday. And that's very hard to do with concurrent code, because once we start testing concurrent code, we tend to do those little tricks that only work for that specific scenario, and when that scenario changes a bit, the test will fail and I'll be frustrated, because there's no bug there, just some inner workings that changed a bit. And finally, a unit test should be readable. As far as I'm concerned, and this is what I usually teach, that is the most important thing about a unit test: you should be able, at a glance, to understand what it does, because we don't have time to mess with failing tests. We want to fix them and understand why they failed as quickly as possible. And once you go into this whole multi-threading, asynchronous world, readability tends to fly out the window. So when developers, and I'm talking about talented developers, not junior developers, see concurrent code, they usually tend to write concurrent tests. They tend to take the mess they have in the production code and transform it, magically, into the test. And then, usually, you find one of those things I call test smells. Have you ever heard about code smells? Oh, good. So test smells are essentially the same thing for tests. The idea is that I have a bigger problem somewhere, and these are the indications that I'm going to have an even bigger problem in the near future. One of them is inconsistent results: you have a test that passes, then fails, then passes again. Or maybe it only fails on Mondays. We had one test that failed every 50 days. Well, good luck with those.
And when you have a test that passes and then fails, you know what the immediate solution is that your average garden-variety developer will use? It's to run it again. Run again. And then it will pass. Yeah, so, woohoo. But unfortunately, that test has no meaning whatsoever, because it didn't find a bug; I just ran it again, right? And when I run the test again and get used to that test failing from time to time, it has no meaning whatsoever, and I should delete it immediately. The reason I should delete those kinds of tests immediately, or fix them, if you prefer, is that after very little time they will affect the rest of your test suite. People will get used to your build server being red from time to time, and then they'll get used to it being red all of the time. Your build server will keep failing the tests and nobody will care. And at that moment, you should delete all your tests; they're just wasting your CPU. No one will ever fix them. So we don't want those sometimes-pass, sometimes-fail tests. Another problem that is very hard to solve is when I have one test over here that runs along and finishes successfully, but unfortunately, between start and finish, it started a new thread, or called an asynchronous method, or something happened inside the production code that you are not even aware of. And then another test comes along and starts running, and that thread comes back and crashes it. Those problems are very difficult to tackle and fix, because you have no idea what caused them. You look at the stack trace, because we get exceptions in .NET, and it has nothing to do with your test. Usually it has nothing to do with anything, because it's a new thread, and that stack is completely empty most of the time. And you run the test that failed, and nothing happens; it passes. The test is okay. It's another test that caused it. And then you get to play binary search with your tests. You run all the tests and see if it fails. If so, okay, good for you, that's the first step. Now you run only half of them. If they pass, you run the other half. If they fail, you start slowly trying to zoom in on the right test, the one that spawned the thread that crashed another test completely. And that's a very hard thing to do, because some of the test runners will mess with you and change the order in which the tests run on each and every run. It could take hours or even days to find that test, and until then, you can't continue working, because you don't have full confidence in what you're doing. Another problem I tend to see is the long-running test. A unit test, as they say, should run in less than a second. Not completely true; it could run longer. But when one runs for more than a second, well, it makes me suspicious. Either I'm calling something outside of my nice clean unit testing world, or I have a few threads there and someone is waiting for them. And when I see a test that runs for a long, long time, I start diving into the code and looking for new Thread, new Task, Task.Start, one of those async things, and a Join or someone waiting for the result. That essentially means I have a thread there. And finally, test freeze. Test freeze, also known as deadlock: I have a test, and the conditions were right and the stars aligned the right way, and the test froze. Usually it doesn't happen on my machine. It will happen on the build server, which is so much more fun, because I'll commit my changes.
I have a continuous integration build and everything's okay. After four or five hours, I go and look at what the state of the build is, and I see it's been running ever since, because one test froze. And not all build servers will let you know which test it was, and that's the hard part, finding out what caused the problem, because all of those problems tend to go away once you start debugging them. Now consider the next method. This is a very simple method; I took away all the business logic and just left the core essence. Taking this method as it is and testing it is not easy. There are two things here that hinder my ability to test it. First of all, the infinite loop. You can't unit test an infinite loop. That's one of our problems. Second of all, I have a sleep there. You see the sleep at the beginning? That guy. Now, a sleep in the code under test is never a good thing, because what I essentially want is to get that sleep over with; then I have the logic, and that's what I want to test. And a developer, when faced with this specific scenario for the first time, will probably write a test that looks something like this. I have seen this test being written, in one form or another, many, many times. The idea is basically: my code sleeps for one second, so why don't I sleep for two seconds in the test? Then I can semi-guarantee that by the time I get to the assert at the end, everything has already run. Unfortunately, that's not the way C#, .NET, or even Windows works. Sleep is not that precise, and you can even get to situations in which that specific sleep, the two seconds, finishes before the one second. You need a pretty loaded CPU, but it happens, and it will happen in your build as well. And that's my fifth smell: sleep inside your test. You should never write a sleep inside your test. Ideally, don't write them in your code either, but let's put that aside for now. Once I put a sleep inside my test, my test is time-based. A time-based test, first of all, takes a lot of time. Two seconds is a lot of time; if I have 30 such tests, they run for a minute. If I have 100 of them, well, and if I have 1,000 of them, no one will run your tests. Second of all, that test will become the test that passes and then fails, because you used time, and time-based is inconsistent by definition. And once it does fail, you have no idea why, because all that happened here is I have two sleeps, and I have no idea whether the failure came out of some business logic problem or from an actual synchronization, high-load-on-your-CPU problem. And the hard problem with those sleeps is not discovering them. Discovering them is pretty easy, as you see: search for Sleep. The problem is convincing people not to use them. I go to the developer and tell him, well, you shouldn't use sleep in your test, this is why, and he starts to explain why it is a good idea, because we do love our code. I like my code and I'll defend it; I'll defend my code when I believe my code is right, and I believe every developer is the same way. And he'll explain that, well, two seconds seems like a lot, but we only have ten tests like that, and I'm willing to wait. Well, you are, but what about the rest of your team? And, well, it's not consistent? Don't worry, I'll replace it: instead of two seconds, let's wait a minute. Which is horrible for your unit tests. But I don't argue that much, because I know one truth and one truth only: in concurrency, when something can go wrong, it will go wrong. That's it. It's that simple.
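To make the smell concrete, here is roughly what that naive, time-based test looks like; a minimal sketch with made-up names, shown as the anti-pattern rather than a recommendation:

```csharp
using System.Threading;
using NUnit.Framework;

[TestFixture]
public class MessageWorkerTests
{
    // The smell in a nutshell: the production code sleeps for one second before
    // doing its work, so the test sleeps for two and hopes everything has run.
    [Test]
    public void SendsMessage_TimeBasedAntiPattern()
    {
        var worker = new MessageWorker(); // hypothetical class with Thread.Sleep(1000) inside
        worker.Start();

        Thread.Sleep(2000); // hope the worker's one-second sleep has already passed

        Assert.That(worker.MessageSent, Is.True); // fails intermittently on a loaded CPU
    }
}
```

On a loaded build server, the worker's one-second sleep can easily outlast the test's two-second one, which is exactly the pass-then-fail behavior described above.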
We run a test so many times that if there's the slightest possibility of something not working, it will not work. Usually either five minutes before you go off for your weekend, or an hour before you have to release a version to your customer. This is a good book, by the way, Seven Concurrency Models in Seven Weeks, and I especially like this quote from it. So I don't argue too much. And in fact, I had a similar problem, and I explained to that developer that he shouldn't use sleep. I told him it takes too long and it's not a good idea and whatever, and he believed me. I told him: don't use sleep in your code; change your code so there's no sleep in it, make it more deterministic. And he did. He wrote this code. And to this day, and you can ask him, he's around, he says I told him to do that. And all of us know that it's exactly the same thing. The solution for this particular problem, for this particular code, is: don't use any concurrency whatsoever. If you can avoid concurrency, please do. If something gives me a lot of pain, I shouldn't do it. And so there's a pattern, not mine, it's out of the xUnit Test Patterns book, a very, very big book, but you can also see the pattern on the net, which is called the humble object. Actually, it was invented in order to test objects that are hard to initialize and create, but it can also be used for unit testing concurrent code. All you have to do is extract your logic and put it in another method, in another class; and to tell the truth, if you go according to clean code practices, you should do that anyway. And then I can test it easily. For example, if I take this piece of code, all I need to do is extract it into a new class called MessageHandler, and I apologize for the name. Then I can test this class, because it's very easy to test a single class with no concurrency inside: no threading, no tasks, whatever. And that way I completely avoid the fact that I have threads in my code. Who here thinks I'm cheating when I do that? No one? Well, usually, yeah? Somebody's honest. Usually someone raises their hand and says: no, you told us we're going to test concurrent code, and this is not testing concurrent code. And you are right, in fact: I'm not. Because if I can just test the business logic, that's okay by me, because I know the code, and I know what I want to test and what I need to test, and so I should do that.
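A minimal sketch of that extraction, assuming the original method was a loop that sleeps, reads a message, and publishes a transformed result; the names and the trivial logic are illustrative:

```csharp
using System.Threading;
using NUnit.Framework;

// The logic that used to live inside the loop, now in a plain, testable class.
public class MessageHandler
{
    public string Handle(string incoming) => incoming.ToUpperInvariant();
}

// The humble shell keeps the loop, the sleep and the I/O, but no logic worth testing.
public class MessagePump
{
    private readonly MessageHandler _handler = new MessageHandler();

    public void Run()
    {
        while (true)
        {
            Thread.Sleep(1000);
            var message = ReadNextMessage();
            Publish(_handler.Handle(message));
        }
    }

    private string ReadNextMessage() => "...";  // stand-in for real I/O
    private void Publish(string message) { }    // stand-in for real I/O
}

[TestFixture]
public class MessageHandlerTests
{
    // No threads, no sleeps: just the business logic.
    [Test]
    public void Handle_UppercasesTheMessage()
    {
        Assert.That(new MessageHandler().Handle("hello"), Is.EqualTo("HELLO"));
    }
}
```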
But that's not the end of the story; there are other patterns. Another thing that can happen is that the concurrent part is in the middle, not around our code. For example, here I have a manager that queues a message into this queue. The queue is there for asynchronous events, so that many clients can somehow connect to it, subscribe in some form or another, and get the messages out; and the client gets the information on the other side. I've been with a client that had exactly this scenario, and I gave him the solution I'm going to show you. He was very angry with us, because it was back in the Typemock days, and he said: okay, just because your tool doesn't do that, that's no reason for me to change how I'm testing things. Well, that's true, but this is not a tooling thing. Essentially, what I have here is two different tests, because I have two different objects, two different pieces of logic. The first one is the manager, which needs to do some calculation and put some information into the queue, and then the test ends. And the other one is the client, which is supposed to get some information out of it, do something with it, and go on. So I can test, as I said, before and after; just split it into two different tests. And by splitting this logic into two different tests, I'm able to test them without the concurrent part. What will usually happen is that I either fake the message queue, the one with the threading, asynchronous, whatever, or, if it's written in a specific way and won't start automatically, if I need to call some Start on it or whatever, then I can use the real one, like here. Then I create the manager, give it the queue, put in a new message, and check that the message did in fact arrive, after some processing, I guess. That's the simpler solution. On the other side, I take the client, again create a queue, and make believe, as if that queue told him that he has a new message, and test it individually, in an isolated manner; because another thing about good unit tests is that they should be isolated from outside interference. We need to take a specific scenario, have full control over that scenario, and run it. And that's exactly what I did here. Again, it might feel like I'm cheating, but don't worry, we'll get there.
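A minimal sketch of those two split tests, assuming a simple in-memory queue that does not spin up threads until started, and a test hook that pretends the queue raised its event; all names here are illustrative:

```csharp
using NUnit.Framework;

[TestFixture]
public class BeforeAndAfterTests
{
    // Test "before": the manager's job ends once the processed message is in the queue.
    [Test]
    public void Manager_PutsProcessedMessageInQueue()
    {
        var queue = new InMemoryQueue();      // real queue, never started, so no threads
        var manager = new Manager(queue);

        manager.Send("hello");

        Assert.That(queue.Peek(), Is.EqualTo("HELLO"));
    }

    // Test "after": make believe the queue told the client it has a new message.
    [Test]
    public void Client_HandlesIncomingMessage()
    {
        var queue = new InMemoryQueue();
        var client = new Client(queue);

        queue.RaiseNewMessage("HELLO");       // test hook instead of a real worker thread

        Assert.That(client.LastReceived, Is.EqualTo("HELLO"));
    }
}
```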
So those patterns, I call them the avoid-concurrency patterns, because that's what I'm doing: I'm avoiding the problem completely, because I don't care about it. I don't care whether the solution runs in one thread, in five threads, or in a million of them. And in essence, the actor model works the same way: just write the method correctly, and we'll run it with a million threads. And if that's my solution, I can test it that way. It's the best solution, if I can get away with it. Getting away with it means that, A, I can refactor the code, or use the code as it is, easily; I don't want too many changes to code without any unit tests on it. And secondly, that it's logical to do so. Because sometimes I need more than that; I do need to do something with the multi-threaded logic. Let's talk about timers. I hate timers. I really do. The problem with timers is that some developers don't really understand what happens there, and they use timers instead of threads, and they think they got away from all the race conditions and deadlocks, but essentially it just makes things worse. Because here is what a timer will do. This is Timers.Timer; you know, in .NET we have four or five kinds of timers right now, I think, but I'm talking about Timers.Timer and Threading.Timer, where essentially one calls the other and wraps it with a nice event. The problem with the timer is that when the time elapses, it queues a new work item on the thread pool, and when it does that, it doesn't care whether or not you're still running a previous Elapsed event. So if you have, like here, some timer, and that timer elapses every one second, but the time it takes to process is longer than one second (some of them, not all of them, because it's not that precise; it can be one second or more, and if your system is crowded enough, you might get more than that), then every one second it will start a new thread, and suddenly you get that whole bunch of cool problems, race conditions and deadlocks, in code where you don't even understand how it got race conditions and deadlocks. Because this guy will spawn new threads all the time, until it runs out of threads in the thread pool, and then other things stop working: everybody else suddenly sees that the thread pool has no threads to use. So that's the first reason I hate timers. The second reason is that I really want to test this code, and the timer doesn't help me a lot. First of all, it's hard to make it elapse at the right time. Second of all, once I have a timer in a test: my unit tests are pretty fast, so they run and finish, but the timer will manage to throw a few Elapsed events in there, some of them just before the end of the test, just to come back and bite you in another test. So that's why I don't like those, especially not in unit tests. There are very valid scenarios for using timers, but I see them abused almost daily. And so a smart, bright developer, and I'm not saying that in a cynical way, I'm really talking about talented guys, what he will do is use dependency injection to inject the timer interval from outside and set it to as small a number as he can, like one. Sometimes zero is also a valid number. The same thing happens with timeouts: when you want to check for a timeout, the same pattern emerges. That's essentially an anti-pattern. A talented developer can even do it without the sleep down there; he'll find a way. But essentially, what we have created is a race between the timer that should elapse and the assert at the end of the test: whichever comes first. And that's a problem. That is the last smell, in fact: setting a timer to a very, very, very low interval and then trying to get that interval to elapse before I reach the end of the test, by using sleep, or by just hoping that the processing time will be longer than that. So again, we have a time-based solution, which is not a good thing. Secondly, it's very hard to investigate the failure: we're not sure whether or not the timer fired before I got to the assert, no one tells me whether it really happened, and sometimes the failure happens because it didn't fire, and I just get an assertion failure. Well, I don't know if the message was sent or not sent, or whether I have an actual bug. And if I use a flat sleep, then, you know, as I said, slide number 12. So the solution is very simple in this case. If I have something that spawns a bunch of threads or starts asynchronous calls, I just get rid of it by using a fake object, also known as a mock; I don't like calling them mocks, but a fake object. And here I'm using Typemock Isolator to create a fake timer, you know, Isolate.Fake.Instance of a Timer. Now, Typemock Isolator: if you don't know your .NET mocking frameworks, there are two kinds in .NET. There are those that use inheritance, FakeItEasy, Moq, you know, Rhino Mocks; they use inheritance in order to change the behavior of the object. And there are others that use the profiler API to go underneath your code and change it while it runs, while it's actually being JITted. Typemock is one of them. That's why I can do that with a timer, because inheriting from Timer won't help you. But there are other solutions, don't worry; I'm not saying you should use this tool specifically. You can ask me afterwards and I'll say exactly what I think about it. Or you can see, I had a session at NDC about .NET mocking frameworks in which, frankly enough, I said exactly what I think about all the mocking frameworks, which one is better than the other. But that's not for now. So, I've created a fake timer, which enables me to do this. Now, this is a very ugly piece of API, and I can say that because I participated in designing it, and there's no really good solution in .NET; I used to work for Typemock when we did that.
There's no good solution for .NET because events are like the stepchild of everything else: an event is not exactly a method, not exactly a class, not an object. So the only way we could find to figure out which event the user is talking about is to do this trick, in which we almost subscribe a handler to that event, just to grab it and throw it away. Essentially, you don't need to completely understand what happens on that line. All you have to know is that once I call that line, the timer will immediately fire the Elapsed event. The Elapsed event is raised at this point; it's not time-based anymore. Once I pass this point, I know the event happened, and then I can go to the assert and check whatever I want to. Unfortunately, this is not possible for every single class out there. It depends on which mocking framework we are using. As I said, some of them use inheritance, and not all objects can automatically be inherited, and not all methods in .NET are automatically virtual. Actually, in Java it's easier, because everything is virtual. And some programming languages won't allow you to do that at all. Other than that, in .NET there's mscorlib. That's the special DLL with all the cool stuff, String and Regex and threads. We can fake those using Typemock or Telerik JustMock, unless they decided to make a specific class internal; never mind about that. And so the thread pool, for example, is unfakeable. You can't fake the thread pool, and it's all static methods. But the solution is very simple, and those of you who use the inheritance-based tools can use it as well. Anyone who has ever used a mocking framework, anyone here using a mocking framework? Well, you should, at least once; everybody started with Moq and now it's FakeItEasy. Anyway, anyone who has used one has run into File.Open, for example, and wrapped it with their own class to be able to fake it later on. So that's what I do here: I wrap the thread pool with a ThreadPoolWrapper, excuse the name again, and all it does is call the real class. That's it. Then I go through my production code and change every single place that calls the real ThreadPool to use my new class instead. Now, essentially I didn't do much, but I gave myself the ability to fake that class. I don't really like that solution, to be honest, but sometimes we live in a very imperfect world, and sometimes we do things we don't really like; it's the least bad solution I could find. By the way, DateTime.Now is exactly the same story: I don't want that in a test, so I find some way, some time service, or I wrap it with my own class, so that I can decide what time it is in my test. I don't want anything to be time-based in any way. So once I've changed the code to use my ThreadPoolWrapper, I can write a test. Now, let's talk about what we have here. Basically, what the first line does is tell it that when this method, the one I care about, is called, it should instead do this ugly thing, which essentially says: take the first parameter, which, if you remember, is the callback I want to run, and run it immediately. So when someone in my test calls ThreadPool.QueueUserWorkItem, it will immediately run that callback. So again I took something that spawns a new thread and made it run in my own thread, in my test. And that's why I know that the method from before will not actually run on a new thread. It will run on the same thread as the unit test, and that's why I can assert later on.
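A minimal sketch of that wrapper plus a synchronous fake; here the fake is hand-rolled to keep the sketch framework-neutral, while in the talk the same effect is achieved by configuring a mocking framework to run the first argument immediately:

```csharp
using System.Threading;

// Thin wrapper: production code depends on this instead of the static ThreadPool.
public interface IThreadPool
{
    void QueueUserWorkItem(WaitCallback callback);
}

public class ThreadPoolWrapper : IThreadPool
{
    public void QueueUserWorkItem(WaitCallback callback)
        => ThreadPool.QueueUserWorkItem(callback);
}

// The test double: runs the callback immediately, on the test's own thread.
public class SynchronousThreadPool : IThreadPool
{
    public void QueueUserWorkItem(WaitCallback callback) => callback(null);
}
```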
And that's all nice. But there is one problem with multi-threaded, asynchronous code: you can never tell when something does not happen. Ever. If I have a requirement that, in case of a validation error, we don't call Amazon, for example, and because we wrote good code it happens in another thread, because we don't want the UI to be stuck, then there's no amount of time I can wait in order to conclusively say: this never happens. If I have something that starts another thread and runs off on its own, how much time can I wait for it? One second, one minute, one hour, one day, one week, one year? It will never tell me that it did not happen. So in this case, what I usually do is build a situation in which the real code runs with the asynchronous parts, or the threads, or whatever, while in the test it runs on a single thread. That way I can guarantee that something did not happen, because it ran on the same thread. For example, let's talk about this thing over here. Right here, there you go. I have a class to test. Now, this class gets a message bus, some layer, whatever, and starts a new task; everybody can see that in the back. And here it does some work, which I do care about. But due to the way the code is written, I can't use the humble object pattern; I can't extract it out, for some particular reason. Instead, I have a real task running here, and I do want to know whether or not, after everything is said and done, the event was in fact not raised. So I want to run it on my own thread. Now, the cool thing about tasks is that I can use a different task scheduler. In this case, I'm using the current task scheduler. Usually it's the default one; you don't have to write that, it's either default or current or whatever. I did it because of something you'll see in a minute. Now, what's cool about tasks, for example, is that I can create my own task scheduler. This is my CurrentThreadTaskScheduler, and you can Google it and find all sorts of examples of how to write it. It's not that hard; it's like four lines long. Basically, what it does is: when someone queues a new task, it executes it immediately. So I made this task scheduler in order to run everything on the same thread. That's what's happening here. But I still have a problem. I need to somehow change the scheduler within my production code, down there, from the test, up here. So if you look at the test, what I am doing, which looked like complete nonsense in the beginning, is that in the test I start a new task, which seems to go against everything I've been talking about so far, right? But this new task will use the new CurrentThreadTaskScheduler to run. So this line has no meaning whatsoever in itself. The only thing it really does is change the current task scheduler that my code uses, to this task scheduler instead. And so I affected the code inside to run on the same thread, from outside. Because that's usually what we do with unit tests: we want to affect the code from outside. We don't want to go in and, you know, change a bunch of things, and we want to run the code as realistically as possible. And usually I don't want to do too much refactoring unless I have unit tests, and that's the real balance: no refactoring without unit tests, and unit tests before refactoring. So that way, I can change this easily.
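A sketch of such a scheduler and of the wrapping trick, assuming the production code starts its inner task via Task.Factory.StartNew with no explicit scheduler, so it picks up TaskScheduler.Current; the class under test and its event are hypothetical:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using NUnit.Framework;

// Executes every queued task immediately on the calling thread.
public class CurrentThreadTaskScheduler : TaskScheduler
{
    protected override void QueueTask(Task task) => TryExecuteTask(task);

    protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
        => TryExecuteTask(task);

    protected override IEnumerable<Task> GetScheduledTasks() => Enumerable.Empty<Task>();
}

[TestFixture]
public class MessageProcessorTests
{
    [Test]
    public void InvalidMessage_DoesNotRaiseEvent()
    {
        var sut = new MessageProcessor();           // hypothetical class under test
        var raised = false;
        sut.Processed += (s, e) => raised = true;   // hypothetical event

        // Wrapping the call in a task on our scheduler makes TaskScheduler.Current
        // inside the production code be ours, so its inner Task.Factory.StartNew
        // runs synchronously on the test thread.
        Task.Factory.StartNew(
            () => sut.Process("invalid"),
            CancellationToken.None,
            TaskCreationOptions.None,
            new CurrentThreadTaskScheduler());

        Assert.That(raised, Is.False);              // safe: nothing ran on another thread
    }
}
```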
There are other ways to do that, and you probably know all of them. We can use dependency injection: inject the class that starts the new threads, and in the test inject a fake object, a mock, or a class that does not start new threads, right? We can use #if defs, or, how do we call them, those hash things, so that during compilation, when I compile for tests, one specific branch of the code runs, and when I'm not testing, the other one does. Although I don't like that solution, because I want to run and compile all of my code all the time. We can pass a delegate; lambdas and delegates are a good way to play with concurrency. The method gets a delegate and executes something; during tests, that execute-something runs on the same thread, and in production it starts a new thread, or calls await-async, or whatever. And so on and so on; I'm sure all of you can think of other ways to make the code run one way during testing and another way in production. That way I can check things, especially for the scenarios in which something should not happen. And those things I just showed you, I call them the single-thread patterns: fake-and-sync, where I fake the thing that creates the new thread so it won't create threads, and everything runs on the same thread as the test; and the second one, I'm not good with names, where I run the code asynchronously in production but synchronously in tests. So those are the second kind of patterns. But that's not the whole story, because sometimes I do want to run the code as close as possible to how it runs in production, and I guess that when you came to this session, that's what you thought I was going to talk about, more or less. And sometimes I can't change my code, or don't want to change my code, to run it the other way. There are solutions for that too, and the amazing thing about them is that I know several consultants who came to the same conclusions individually regarding these patterns, because we always want to run the code as close as we can to how it runs in production. The first one: the signal pattern. I'm not sure it has a proper name; that's my name for it. The idea is very simple: I run something, then make my test wait until some operation happens, and then I continue executing the test. This is what it usually looks like. If I have a class with a very complex, time-consuming calculation, as you can tell, I run it on a different thread, or use an asynchronous method or whatever, in order not to block everyone else. The lucky thing I have here is that after the calculation is over, I call a different dependency, some other class outside of my code. And this is a hook I can use. I can basically create a fake object of that class, and tell it that when this method is called, just set a wait handle. Everybody here knows AutoResetEvent, ManualResetEvent, and the counting wait handles, right? We've had them for ages. Those are great things to use in unit tests, because what I do is start the execution, go down, and wait until the handle is set. I usually use a ManualResetEvent, or a countdown event if I have several things to wait for. And if I can hook into my code at a place where I can guarantee that the scenario I'm testing has just concluded, then that's what I'll do in those cases: I'll wait for the signal.
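A minimal sketch of the signal pattern as described; the calculator and its dependency are illustrative, and the fake is hand-rolled instead of generated by a mocking framework:

```csharp
using System;
using System.Threading;
using NUnit.Framework;

public interface IResultPublisher
{
    void Publish(int result);
}

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Calculate_PublishesResult()
    {
        var calculationDone = new ManualResetEventSlim(false);
        var published = 0;

        // Hand-rolled fake: signals the test the moment the dependency is called.
        var publisher = new FakePublisher(r => { published = r; calculationDone.Set(); });
        var sut = new Calculator(publisher);    // hypothetical class that computes on another thread

        sut.CalculateAsync(2, 3);

        // Always wait with a timeout, and assert that the signal actually arrived.
        Assert.That(calculationDone.Wait(TimeSpan.FromSeconds(5)),
            "Publish was never called: the calculation did not complete");
        Assert.That(published, Is.EqualTo(5));
    }

    private class FakePublisher : IResultPublisher
    {
        private readonly Action<int> _onPublish;
        public FakePublisher(Action<int> onPublish) => _onPublish = onPublish;
        public void Publish(int result) => _onPublish(result);
    }
}
```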
Now, it's very important to use a timeout in those cases; otherwise, your test might freeze forever. So I always use a timeout. And you do another thing that unit testing consultants usually tell you not to do: you have two asserts here. Because I need to assert two things. One of them asserts that the thing I'm waiting for actually happened, that the method was, in fact, called. And this is about the only place in the world where I will use the message string on the assert to explain what didn't happen, because otherwise it's not very easy to understand what was or wasn't called. So I leave something for myself, a month or a year from now, when the test fails, so I can remember which scenario it was that didn't happen. That's why I use that. And then I get to check whatever I wanted to check. This is essentially a signal: I'm waiting for a signal that the calculation has finished, and then I continue execution. Another thing I can do, and this was discovered by a colleague of mine, who had a bunch of multi-threaded things to test and could have refactored them, but chose to test them as they were, is a new type of assert. Instead of immediately asserting, he takes an action that is essentially the assertion and runs it until some timeout passes. This is the busy assert. Basically, that's what it looks like: I wait until the result equals 5. And by the way, you have to use a lambda, because if you just write result, it will be evaluated immediately and then will always be false; at least most of the time it will be false, because the thing hasn't happened yet. You need the lambda because you want it to be re-evaluated every time it polls for the result; that's this part. And I tell it: as long as it's not equal to 5, you have 50 retries, polling every 100 milliseconds, and it will go and try, try, try to get the result. And amazingly enough, I gave this talk a while back, last year I think, and someone told me: you know that NUnit has the same capability, right? And NUnit does have the same capability. Anyone here using NUnit? Yeah, good for you. I like NUnit. Here we go; that's NUnit's way of writing a busy assert, same example. You use Assert.That, and you have to use a lambda here too. If you don't use a lambda, and I fell for that one many, many times, if you just write it like so, it will always fail, because immediately as it runs it takes the result, which is not 5, it's 0, and it will poll for 10 seconds, or minutes, or however long you want, and it will never become 5, because it was evaluated immediately. You want something like this, so that every time it checks, it gets the current value. And since we were talking about multi-threaded things at the beginning: well, I know what I did when I forgot about that, and we tend to forget, because we are very stressed people with time limits on everything we do. I forgot that once, and I thought I wasn't getting the result because I had a race condition or some other weird thing, but in fact I had just forgotten to use a lambda. So basically I tell it: check that the result equals 5, and it should be equal to 5 after 10 seconds or less, polling every 15 milliseconds; and the or-less part is the important thing here, because it's not like using sleep.
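A sketch of both flavors: a hand-rolled busy assert and NUnit's built-in delayed constraint, using the retry and polling numbers mentioned in the talk (the background calculation is a stand-in):

```csharp
using System;
using System.Threading;
using NUnit.Framework;

public static class BusyAssert
{
    // Re-evaluates the condition until it holds or the retries run out.
    public static void Eventually(Func<bool> condition, int retries = 50, int delayMs = 100)
    {
        for (var i = 0; i < retries; i++)
        {
            if (condition()) return;
            Thread.Sleep(delayMs);
        }
        Assert.Fail($"Condition was not met within {retries * delayMs} ms");
    }
}

[TestFixture]
public class BusyAssertExamples
{
    private int result;

    [Test]
    public void HandRolled_BusyAssert()
    {
        StartCalculationOnAnotherThread();
        BusyAssert.Eventually(() => result == 5);   // lambda: re-read on every poll
    }

    [Test]
    public void NUnit_DelayedConstraint()
    {
        StartCalculationOnAnotherThread();
        // The lambda matters here too: () => result is re-evaluated while polling.
        Assert.That(() => result, Is.EqualTo(5).After(10000, 15));
    }

    private void StartCalculationOnAnotherThread() =>
        new Thread(() => { Thread.Sleep(50); result = 5; }).Start();
}
```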
Why did I say all this? Because you can look at it and say, well, that's exactly like using sleep. But it's not like using sleep, because once the value is the right value, the test ends, and usually that takes less than the ten seconds; at least that's what I hope. With sleep, you wait the whole time, every time. When the test does fail, due to a timeout, then you wait the whole time, so the test can take longer, but only when I have a failure, and that's okay, because either the test runs fast, or I just found a bug. So I'm fine with it running longer on failure. The problem with these two patterns is that once I get a failure, I'm back in the world where I'm not sure whether the failure was due to business logic or some threading thing. But at least I've reduced the problem a bit. I still need to investigate, though: start the debugger, use logging or whatever. And logging is a dirty word when we're talking about unit tests, because you shouldn't need it, you should understand the test, et cetera, et cetera. That's true, but let's be pragmatic: we do want to fix the problem. The tests will run for too long, as I said, but only on failures, and that's okay. I use these patterns almost all the time, because code is becoming more and more multi-threaded and asynchronous, but I only use them when the other two kinds don't work. Now, finally, let's talk about testing for concurrency issues. Before we start, I have to say two things. First of all, I was part of a project in which we wanted to deterministically find deadlocks and race conditions in your code, back at Typemock, actually, and it didn't work. This is a very hard problem to solve, maybe even mathematically impossible, and so the project is still there, but I don't think it's in a working state. So there's no tool that you can simply run on your code. Well, there are some tools that will find a deadlock once it has happened, and some that will run a bunch of scenarios and try to induce a deadlock, but not many that will guarantee that you do or don't have a deadlock, or a race condition, which is an even harder problem to solve, in your code. What I usually use the patterns I'm going to show you for is this: once I find a bug in my code, I hack together a test to make sure the bug doesn't return. For example, we once had a problem with a client. We wrote our own TCP client; that's not a good thing to do, you don't want to go there, you have the whole .NET Framework and NuGet and GitHub, full of people who actually know what they're doing. But we did. And quickly we discovered there was a problem with the client: it wasn't asynchronous, which means we would send a message and wait for the next message to come, and everything would freeze in the middle. And that's not a good thing. So we started writing unit tests to make sure that none of our calls waited: when you run a method, it just sends and then comes back. That's what we wanted. And the way to check for that is, basically, to use a fake object again. Once the dependency, the client's Send, whatever, is called, you have a wait handle, waiting almost indefinitely, in order to make that first call stuck. And then you send another call. If the second call manages to go through, it means that everything is okay and the sends are not running on the same thread. It looks something like this; this is something we used.
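A reconstruction of that kind of test: a hand-rolled fake TCP client whose Send blocks until released, so reaching the second call proves SendEmail returns without waiting on the wire; EmailSender and the interface are illustrative names:

```csharp
using System;
using System.Threading;
using NUnit.Framework;

public interface ITcpClient
{
    void Send(string message);
}

[TestFixture]
public class EmailSenderTests
{
    [Test, Timeout(10000)]   // tests for concurrency issues must never hang the build
    public void SendEmail_ReturnsWithoutWaitingForTheWire()
    {
        var release = new ManualResetEventSlim(false);
        var client = new BlockingFakeClient(release);
        var sut = new EmailSender(client);        // hypothetical class under test

        sut.SendEmail("first");                   // the fake makes this call stick on the wire
        sut.SendEmail("second");                  // if we get past this line, Send runs elsewhere

        release.Set();                            // let the background sends finish
        Assert.That(client.WaitForBothSends(TimeSpan.FromSeconds(5)),
            "background sends never completed");
    }

    private class BlockingFakeClient : ITcpClient
    {
        private readonly ManualResetEventSlim _release;
        private readonly CountdownEvent _finished = new CountdownEvent(2);

        public BlockingFakeClient(ManualResetEventSlim release) => _release = release;

        public void Send(string message)
        {
            _release.Wait();      // freeze the caller until the test lets go
            _finished.Signal();
        }

        public bool WaitForBothSends(TimeSpan timeout) => _finished.Wait(timeout);
    }
}
```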
Basically, the first thing, and this goes for both kinds of tests for concurrency issues: you must use timeouts. You must use them, because these tests will get stuck from time to time; that's the purpose of these tests, to get stuck. For example, here I tell it that when the fake client's Send is called, I want it to wait on a wait handle, indefinitely, essentially. Well, not indefinitely, but almost. And I have another wait handle just to make sure it has finished working; that's just for cleaning up later on. Then I do something else, never mind, and then I just call SendEmail twice, one after the other. The first one gets stuck; the second one should go through, or get stuck as well, I don't care. As long as I manage to get to this point, it means they are not running on my thread; they came back immediately, no matter what happens later on. So if I got here, I can release both of the calls and check that they finished running, because I don't want threads out there just waiting to crash the next test, and then I can assert that everything's okay. Essentially, that's what I'm doing. That's the way we found that worked for us, for testing that something is actually asynchronous. Because what would happen is that one developer would find the bug and fix it, and a second developer would come the next day and mess everything up; that's what usually happened there, until we put those tests in place. Another thing we can test for is deadlocks. And that is a very fragile test to have; I won't use it all the time, just when I have a really crucial bug, a deadlock I never want to see again. By the way, some people test for deadlocks by raising a hundred, a hundred and fifty threads and making them run against each other. Well, sorry. I've been there; everything I show here, I did. So don't worry and don't feel bad if you did that as well. The problem with those kinds of tests is that once they do fail, and that's what we care about in tests, that they fail, I have no clue what to do with it, because I had 100 threads and something bad happened. So I know that something bad happened; that's a nice indication. But I have no idea what to do with that information, because everything was just running all over the place; there's no way to know what to do with that kind of information. Sometimes the notification is enough, but sometimes I want to actually fix the bug afterwards. In this case, I am more precise. What I do is take two threads and spawn them. We created a method called RunInDifferentThread; essentially, it creates a new thread and then, in the test cleanup method, we check that the thread hasn't thrown any exceptions. That's all that method does. All the examples are on GitHub, and you can play with them and see what happens there. Basically, what we do is: first of all, a timeout, again. Secondly, we make sure that when some external dependency somewhere is called, I freeze the thread. The reason I'm freezing the thread is that I want to make sure, as much as I can, and we are talking about threads here, that both threads are standing immediately before the deadlock; I hold them just before it. That's what I'm doing here: I say, when you call that dependency, just wait. And I run both methods on different threads, that's that, and then I say: go, run against each other. If I get a timeout in the test, it means I do have a deadlock. Although it's not that precise; from time to time it will pass and then fail. But once it does fail, I know I have a problem.
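A rough sketch of such a deadlock test, assuming a hypothetical service whose Transfer and Audit methods are suspected of taking two locks in opposite order, and a hypothetical test hook (BeforeSecondLock) that the production code calls right before taking its second lock, so the test can freeze both threads at the brink:

```csharp
using System;
using System.Threading;
using NUnit.Framework;

[TestFixture]
public class DeadlockTests
{
    [Test, Timeout(30000)]   // a frozen test must fail, not hang the build forever
    public void TransferAndAudit_RunAgainstEachOther_WithoutDeadlocking()
    {
        var sut = new AccountService();                 // hypothetical class under test
        var bothAtTheBrink = new CountdownEvent(2);
        var go = new ManualResetEventSlim(false);

        // Freeze each thread just before the suspected deadlock point...
        sut.BeforeSecondLock = () => { bothAtTheBrink.Signal(); go.Wait(); };

        var t1 = new Thread(() => sut.Transfer());
        var t2 = new Thread(() => sut.Audit());
        t1.Start();
        t2.Start();

        Assert.That(bothAtTheBrink.Wait(TimeSpan.FromSeconds(5)),
            "the threads never reached the locks");
        go.Set();   // ...and now let them run against each other.

        // If either Join times out, the threads are stuck on each other: deadlock.
        Assert.That(t1.Join(TimeSpan.FromSeconds(5)), "deadlock: Transfer never finished");
        Assert.That(t2.Join(TimeSpan.FromSeconds(5)), "deadlock: Audit never finished");
    }
}
```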
And it's a very fragile test to have. I wrote one, I think, two times throughout my career, when we had a crucial part of the application deadlocking all the time and we wanted to make sure it didn't happen again; we wrote a bunch of tests like that, which hold the threads and then make them run against each other. So those were the patterns. First of all, the avoid-concurrency patterns: the humble object, extract the logic and test it, and before-and-after, take the part before the concurrency and the part after and test them individually. Then, run in a single thread: fake-and-sync, fake the timer, the thread pool, whatever starts the new thread, and make it run whenever you choose; and arrange the code so that in the test it won't run on a different thread. And then the synchronization patterns: wait for a signal, or use a busy assertion. It's very easy to write a busy assertion; essentially it's a while-true that runs a bunch of times until the thing happens or the time passes. And I showed you two ways to test for concurrency problems, although in that area we still need some research; no one really knows how to do that right. And the order in which I apply those patterns is exactly the order in which I showed them. First of all, when I can, I avoid the concurrent part, if I can, if it's logical to do that; and you're talented developers, so you get to choose, you decide whether or not that's good enough. If I manage to avoid concurrency, good for me. If not, I go to the next level and try to run everything, but in a single thread. If I manage to do that, good for me again. And if not, I use synchronization, either a busy assert or the signal pattern. This is the way to do it. That's it. Thank you very much. I'll be taking questions. You can find all of the examples I showed, including the ones inside the presentation, on GitHub. My name is Dror Helper, so if you go to GitHub and look me up, you'll find them. If you go to SlideShare and look up Dror Helper, you'll find the slides. And feel free to ask any question, either now or afterwards, or come to the booth and ask me there, whichever you prefer. So, any questions? Anything? Yes? What about integration tests? Well, you ask about integration tests; I was talking only about unit tests, and you are right. In integration tests, essentially, I allow myself to do more. I run the whole concurrent part, and usually the busy assert is your friend there. And since you should have fewer integration tests than unit tests, I allow myself to do more of those things that I told you not to do in the beginning, because I have only 10 or 200 integration tests. They can run longer, I can put them in a different build, and integration tests, when they fail, I need to investigate them anyway. So that's more or less okay. Although, again, when I can use one of those patterns, I will use them even in integration tests. Okay? Well, thank you very much.
Getting started with unit testing is not hard; the only problem is that most programs are more than a simple calculator with two parameters and a return value that is easy to verify. Writing unit tests for multi-threaded code is harder still. Over the years I discovered useful patterns that helped me to test multi-threaded and asynchronous code and enabled the creation of deterministic, simple and robust unit tests. Come learn how to test code that uses concurrency and parallelism, so that the excuses for not writing unit tests for such code become as obsolete as single-core processors.
10.5446/51859 (DOI)
We're good to go? Excellent. Hi. Hello. I can see about four people. I've been giving tours of the stage before we started, showing what it's like to be a speaker on this stage. Right now I can see about this row of people; all the rest, I just hear noises, so I'm assuming it's all voices in my head at this stage. Welcome to website fuzzing. This is a continuation of a lot of the security-track talks that we've been doing today. I hope you've managed to see some of the other security talks around, like Steven's and Chris's, all quite good ones today. This is about how to attack websites and break them, and how to use very offensive tools to help you find the holes in yours. So a little bit about me. My name is Niall Merrigan. I am a managing consultant with Capgemini in Stavanger. I'm also an MVP in developer tools, Azure Insider, Azure Advisor, ASP Insider, and prolific whiskey drinker, as was found out last night when I went into the Dubliner, and I'm absolutely terrified of when my wife looks at our credit card bill and goes: Niall, what did you do? Nothing, I swear. If you want to tweet me, you'll get me at nmerrigan. You'll also find me at niall.merrigan.no. Please visit my company's website, because they are very generous in letting me out the door every so often to talk to you. You can also visit my website, which will have all the links and bits from this talk about an hour after this. This is my builders-versus-breakers talk. Builders: all you nice people sitting here. Breakers: that Aussie right there. And the Aussie's wife right there too, who's trying to sneak in. This is a builders-versus-breakers talk. Builders, we're the nice people. We like to create things; we like to turn Lego bricks into fun stuff. Breakers are those people who like to tear down our lovely bits and find holes and go: ha ha, you did it wrong. Now, the thing is, this is very much about how you can take the offensive tools that the breakers are going to use against your systems and use them yourself, to find the holes before they do. I am of the opinion that a lot of you who are sitting here today are probably aware that we're probably not writing secure code, or we're not doing it as well as we should. So what kind of bits and tools and advice can I give you today that you can use, so that when you go home you don't have to go back to your boss and say: yeah, you remember that really good system we built for a customer? Get legal out, because they're going to have a bit of fun later. I want you to find the holes before they get out into the wild. These things do not make my job any easier than it already is. If you saw my talk yesterday, I showed you how bad things get when you go looking for open stuff; this is now about trying to break websites. So I like to ask people: how secure do you think your code or your applications are? I'm going to ask a general and risky question of the audience, because we all know the Norwegians love talking back. How many of you, when you release a website, go: hi, I've released a website, come look at it? I can see no one here. There's two people there, they're confident, well done you. They're also previous colleagues, so that's kind of cool. No one else? Why not? Excuse me? Penetration testing. I'm going to ask another thing: how many of you do penetration testing? How many of you do regular penetration testing? So there's about a couple of hands. How regular is regular, sir? Three times a year. If you went to the bathroom three times a year, would you consider yourself regular?
Seriously. Okay? Regular is not three times a year. Sorry honey, I'm going to the bathroom, it could be a while; that's the five-minute warning rule, it's a good thing to do. Now I'm serious. The idea here is that regular means every time you push a new piece of functionality, you do a test to verify that it is secure. You don't go: you know what, our applications are secure, our devs are writing code, they're pushing it out onto the web, but we only test it once every three months, because every three months we know it's secure. The attackers are not waiting that long; they're using different tools against you, and the tools we're going to be using are these things. What I'm going to show you today is Kali Linux, which you probably saw a little bit of before. This is a Debian-based Linux distribution that is armed to the proverbial teeth with tools you can use today to test your security. It is free, so there is no excuse of, I can't afford it. It's free; it costs a few megabytes of bandwidth, but you're good. If we look over, we'll see a small little space alien: that is Nikto. Nikto is a command-line tool for doing vulnerability scanning against web applications; I'm going to show you how to use it. Above it you'll see the little spider: that's Arachni. Arachni is a full-on web scanning framework, also command line, but it does have a nice UI as well. Both of these can be integrated into your build processes, and you can run them automatically against your websites on a regular basis. The little needle is for sqlmap, probably one of the most versatile tools out there for finding SQL injection. I'm going to show you how powerful it is and how bad SQL injection really is, because most of us think: oh yeah, no one does SQL injection, you'd never find SQLi anymore, it's a dead thing. SQLi is still the top security failure for most applications, because developers go: ah, you know, how bad could it really be? Then you've got Vega, which is a web scanning tool, another variation of one, and a proxy. And above that, then, is Burp. Burp is one of the most powerful attack tools you can use once you begin to understand things like HTTP transports, and you can start messing with and really playing with your application and see how it will react under the hood. You can do replays; you can do a lot of other mad stuff. I'm going to show you how to use these applications against very vulnerable web applications today. Now I have to put this in. All right: do not do this against any application you do not own or have permission to test, because you will get asked very, very dodgy questions by men in black suits and black ties and glasses and rubber gloves. You know, they will be looking for any excuse to take you apart. So please only do this against applications you own, or have permission to test, or that have been put out there for testing. There are a couple of web applications out there, like Hack.me and Troy's own Hack Yourself First website, that you can use to practice on. They're there to show you what bad coding and bad security mistakes look like, but please don't do this against your neighbor's sweet shop, or random government websites, as I heard about earlier today. Because apparently they won't let you leave. If you're wondering about the backstory of that, there's a gentleman here called Chris. If you find him afterwards, he will tell you a very interesting story of how he nearly didn't get out of Australia.
Now let's praise the demo gods again, because there's a nasty habit, when there are people down there in the audience, of my demos tending to break. You got this, Superman. Right, let's go, Joe, as I say to my son. Let's go. We have some demos to play with. I'm going to show you what Kali looks like. This is Kali 2. It is, again, free off the internet. You can play with it as much as you want. I'm running it in virtualization mode just to show you that it's completely portable. If you've never seen this before and you didn't catch my talk last year: there is a ton of tools to play with. I'm going to be using a lot of the web application analysis tools, like here, and I'm also going to show you some of the exploitation tools. The first thing we're going to do is run a thing called Nikto. But first, I want to show you the applications we're going to be playing with. You can go and download from the internet a thing called DVWA: Damn Vulnerable Web App. This is purposely built to be insecure. It's written in PHP, which we all know is very secure. What it does is it has a lot of training and testing in there. If you want to really learn what bad coding looks like, look at the source code of this, and then start playing with the app and really testing your tools. There's also DVWS, Damn Vulnerable Web Services, if you want to play with that as well; I think there's a mobile application too. Over here, I've also got Mutillidae. This is from OWASP, and it's Mutillidae 2. This particular application is really, really good for learning how to do hacking against websites and seeing bad security failures in action. It has a couple of things like hints: if you're unsure of how to do a particular attack, you turn on the hints and it will walk you through it. There's a video. But if you want to say, I want to show this to all my customers and let them play with things, they can say: right, here you go, have a go at it, see what you can find. You can see here they've got the different OWASP Top 10s. You can see, for example, injection; if we go over here, you can see extract data, bypass, and there's also sqlmap training, if you want to say, okay, I want to learn how to use these tools. If you find that this talk gets you to a point where you really want to go and play with it, download Mutillidae first. The funny thing is that Mutillidae is, what is it called, the velvet wasp. It's a wasp about this big. It's mahoosive, it's huge, with a sting that will knock out a horse, apparently. But it's a big fuzzy velvety thing, so that's why they used the name. Just really random trivia in the middle of a talk, to see if you're still listening. Now, there are other things here. I'm going to show you the last one, which is this man. This is Hack Yourself First. It's from Troy Hunt, whom you may have heard of before. This is a fantastic tool, if you blend it with his Pluralsight course, because what it does is show you how to do, with .NET, the same kinds of things we do with PHP. You can walk yourself through the different vulnerabilities. You can try and hack it, and it gets reset every so often. And because it's online, you don't have to carry around a web application to do it, and you don't have something vulnerable on your own machine; you can destroy his instead. And right, now we're going to go into showing you Nikto. So Nikto is just as simple as typing that, and you get a list of options and things you can do with it.
So here we have the general options you can use. You specify a target host; you say: what do I want to attack? And if you just type in nikto --host and then put in a URL, it will try and find vulnerabilities on that application for you. So we're just going to do that right now. Let's just go in here. So let's go back. There we go. We'll just do nikto -h, for host, and http://localhost/m, because I know that one exists. And then it will start finding all the bugs. It's very kind of slow; it takes about 35 seconds. Now, you can build this into your build step, so it'll just automatically go through the different application endpoints for you. I will warn you, though: this might throw a couple of false positives at you. A false positive is when it says there's something wrong, but it's not wrong. That's because it analyzes the response: if it requests a URL and expects to get back a 404 but gets back a 200, it'll say there's something here, therefore the application is broken. And you might go: uh-uh, no, that's not a thing there. So, let's note what has happened here. You will see that this application has, for example, a .git/HEAD file that was found. What does that mean? Anyone using Git? A couple of people. Good. Anyone know what .git/HEAD means? Source control. Yeah, and what does that mean? You can download the entire source control folder, so there's the entire source. Because you will find applications sitting there where the developers haven't understood how to push the application correctly to the web server and have left the .git folder there, and you just connect to that and download the complete source code. Now, what happens when you have the complete source code of an application? Lots of connection strings. I'd be just sitting there going: I can find every single vulnerability, because I can examine your code. I don't even have to play with the app; I can just download your code. Now, you'll also see other things up here. For example, if we go all the way back to the top, one of the first things it finds is that robots.txt contains eight entries which should be manually viewed. Ladies and gentlemen, what is robots.txt? It tells your search engine what not to look at, right? What does robots.txt contain? It contains information, and it is human-readable text, meaning: do not use it as a security feature for your application. Please don't search the hidden directory. Please ignore the password file. It's guidance to web crawlers to say: I don't want this indexed. People like me ignore it, the same way some people ignore stop signs and red lights. It is very obvious to just go in and say: I want to go and look at the robots.txt. Have you ever done that? Have you ever just gone to a website, typed /robots.txt, and seen what is in there? The old whitehouse.gov had one of the best ones. It had 1,200 lines in it, for everything they didn't want you to search. 551 is about line 27. All right. What we will find in here is a list of different problems you can fix. It takes quite a while, and as you can see, it's not really user friendly when you have a lot of command-line output; it's a bit heavy. So, I'm all about making things light and easy for myself. I keep going and I keep scrolling and I keep scrolling and I keep scrolling some more. What I plan to do now is show you what you can do with Arachni.
What I plan to do now is show you what you can do with Arachni. So I have to go to a different directory, tools, and Arachni. If I do dot forward slash... no, it's bin, isn't it? cd bin. So here's Arachni. You can download this and run it. Now, in Kali 1.0.6 or 1.10 this was installed out of the box. I'm fairly sure I've got it separately because I'm using Kali rolling, which keeps updating itself. They may have put this in the new build, but they may not. But I have downloaded the latest one from Arachni. You can just go on to the website, Google Arachni, you'll find it. And then if you just do, for example, dot forward slash arachni here, this gives you a list of options you can use. When it turns up. Come on. Come on. There we go. We'll try arachni_web then. So Arachni Web runs on top of the command line edition of the application. It should be running. Is it running? Am I doing something wrong here? Dot forward slash arachni underscore web. This worked two seconds ago. Right. We'll ignore Arachni for a second, I'll just tell you about it. Damn you. Arachni is a web application scanner. It has a lot more features than Nikto. You can specify: I want to do SQL injection, I want to do XSS injection, I want to do different types of scanning. I can validate forms, I can validate POST requests, I can also validate cookies, you can validate headers. Now, when I ran this on one of my internal applications, it took approximately six hours to go through every single test across 1200 pages. So it takes quite a while. But you can multiply it out across multiple different computers to work for you. It makes it much easier and much quicker. So if you want to spin up a couple of different virtual machines... did it work? Yes. We're missing the URL argument. That means I can do dash h to get the help, I think. So I want to show you what it can do here. So if we look at the different options, I kind of tell people that there is quite a lot to work with here. So if we go here, you can also put in an authorized-by in every single request. So if you're being asked by a company to check an application and they want to find out where all this amazing and crazy traffic has come from, you can put in an authorized-by so that they will understand and be able to filter their logs. Because a lot of companies, what they'll do is they'll have a firewall or some kind of deflection shield in front of their application to stop this kind of traffic once they start seeing a spike. Now, what you want to be able to do is say, well, I'm going to be coming from this IP address, I'm going to be running these different attacks, and I want you to be able to find that in your logs. Because if someone is watching and you're running a lot of different attacks, they will slip their own attacks in with yours, so that all of a sudden they're in the middle of the mix and being ignored. So you want to watch out for that. There's also a thing here, if I go further down. Let's see if I can find it. This is always the part where I have to try and look for it. Where is it going? Come on. There we go. There is a thing for using a browser cluster, where you can specify the number of browsers you want to run at a specific time. So you can even use this as a load testing application. Now, if we were to run arachni_web instead, wait for it to power up, Arachni Web is the web UI version of this.
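Condensed, the Arachni side of this demo is just a couple of commands. A sketch, assuming you've unpacked the download into a tools/arachni directory the way I did here:

  cd tools/arachni/bin
  ./arachni http://localhost/m    # command-line scan of one target
  ./arachni_web                   # web UI; then browse to http://localhost:9292

The command-line scanner takes the target URL as its argument; run ./arachni with no arguments to get the full option list, including the authorized-by setting and the browser cluster options I mentioned.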
It runs on localhost on port 9292. All right, localhost 9292. There we go. So what you get here is a much easier application view and configuration. You can say, I want to create a new profile, I want to scan a new website. And when you scan a new website, it will ask, what target URL do you want to use? It will give you the different options. You can specify the distribution, the number of instances you want to run, span it across maybe 20 or 30 instances in your farm, and say, I want to get quick application feedback and see how it works. You also get different profiles. So if we were to create a new one. Come on. You can audit, for example, HTTP network settings, fingerprinting. You can do all these different types of checks. The security checks give you the different options you can use. So you can say, for example, I want to check XSS, I want to check DOM XSS, DOM XSS in script context. I can also do response splitting, remote file inclusion. A lot of very complex attacks that would normally be very difficult to set up manually. But by being able to just click, click, click the different ones I want to use, you can start running very advanced tests very quickly. What's also cool is when you do these scans, they generate a bug list which has a discussion. So if we were to scan, for example, if I just do a scan, new, and I type in http localhost forward slash m and just go. It starts up and says, scan is initializing, please wait. And we get a little one. Can I control-plus that? Oops, something's up with the logs here. Great. Typical. When you start logging bugs, you can start bringing in comments, and you can say, I want to show what's wrong with this application. Here's a bug. So your testers have found a bug in your application. Your developers are going, that's not a bug, because that is how the customer wants this. But the security scanner is saying, well, no, this is wrong. And then you can start building in a discussion and building it into your bug tracker. You can send the results to, for example, your GitHub tracker or any other application: Jira, Bugzilla, whatever you're using. And you can then start having a discussion about the security problems in your application. It makes it much simpler and much easier for you to use. This is one of the most powerful web scanning frameworks out there that you can download and work with today. So if you are not using any vulnerability scanner right now and you want to start, this makes it so simple. And because you get a nice graphical view, your managers will even understand what you're doing. Now, the problem I find is that when people go into the command line, my boss goes, I have no idea what you're doing. But when I put up a nice little pie chart that shows, here's all our security problems, here's the highs, lows, and mediums, then you can go, all right, there's the stuff we can fix. This makes it much easier. You can have a dashboard. You can do whatever you want. It makes it really easy for your team to collaborate. Now, I'm going to show you what's next on the list. SQL... actually, I'm going to show you Vega first. Vega is for when Arachni and Nikto are a bit too complex for you, and I won't say that in a bad way. It's more a case of: who are you trying to show that there's a vulnerability, and what type of tool do you need to use? Vega is very graphically driven.
So what you can do is point it at an application and say, I want to scan and find out what's wrong. You can see, for example, here, I did it on Troy's site. What I did here is I told it to scan Hack Yourself First and find me all potential problems. So I ran it for a couple of minutes, and it came back and said, I have a couple of different things that are wrong. I can see here I have eight high priority items, 19 low, and a couple of info items. So on the high ones, I can see clear text password over HTTP. Now this, as we all know, is bad. Sending passwords over plain text is bad, right? But what this does is it shows you the impact, and it also tells you what you shouldn't be doing and gives you some references. So it doesn't just tell you, haha, you're stupid. It says, haha, you're stupid, but guess what, here's a fix. It's just whether the application cares. Now, I didn't know if that would work. Sorry. So anyway, you've got this, the session cookie. It says, for example, here's something that's wrong. You can decide if this is right or wrong, because a lot of the time ASP session cookies can come up as a false problem. But it says, here's the discussion: an issue, a cookie has been set without the secure flag. Here's what the impact is. So this is what I find is very good: it teaches you what's wrong. Because most of us, we get a security bug, it comes up with a big exclamation mark, a red dot, and we go, I have no idea why this is wrong. And I have to try and Google it, and after a while I just give up. But some applications teach you how to do this and make it simpler, and this is one of them. And it's free to download. You can download it for Mac, download it for Linux. I think it even comes on Windows. It is really simple to work with. But what I think is really handy as well is it also shows you what your web application is exposing, because it does a quick scan and shows you the full structure of your site. So you may be thinking, oh, no one can find this because it's hidden in some little folder. This will find it. You can also use it as a proxy. So for example, if you want to run your traffic through it to see what type of application vulnerabilities it finds, it will do it. Now, if we just do a new scan, I can show you the kind of options you'll get. Let's do the same thing here, localhost forward slash m. If I click next, the injection modules you can play with include SQL injection, HTTP trace probes, integer overflow checks, format string checks. It's quite an amount of different things you can use. You can also look for response ones like, for example, the email finder module, which I think is quite cool, because if you use that just on its own to scan a web application, you can start looking for email addresses the web application is leaking and use them for social engineering. Then you can see, for example, if they've got insecure script includes, which is kind of an obscure one for a lot of people, because they may not even know what they're doing, but it will tell you if there's somewhere you can inject script into this application's response. So if I can proxy, or if I can get a man in the middle, this means I can inject a bad piece of code in the response that comes back. All this, like, for example, a social security or social insurance number detector.
Those are ones we're probably not going to use in Norway, but it would be kind of cool. But you've even got X-Frame-Options header not set, common problems that we can very readily fix in applications. This generates a nice, simple, clean report that you can take to management, or you can go to your customer and say, this is all wrong, here's how we fix it. It even guides you through it. It makes it much simpler for you as a developer to say, we've got something wrong. I know you don't get what's wrong, but it's insecure, and here's why, and you can start explaining it. Does that make sense? Okay, everyone's nodding. I think. Because I can't really see you all with my light. Right. If we go out of this, I'm going to start showing you something else. Let's do that. Right. Breathe. Good job. Now I want to show you SQL injection. Now everyone goes, SQL injection, nah, that's not being done anymore. Let's put that to shame. This is the Exploit Database, hosted by Offensive Security. I just did a search for injection. Okay, the word injection. It came back with 7,528 results of exploits in SQLi alone. So you can see here, as of yesterday: DRAILDB table viewer has a blind SQL injection vulnerability, and Electro Web online examination system SQL injection, WordPress pro advertising system SQL injection, open source real estate script SQL injection, PHP real estate SQL injection, EduSec SQL injection. And these are from the last month. SQL injection is still a bloody big problem in computing and development terms. So stop playing with needles. Do not do any injection. Do not get done by this. This is so simple, and everyone goes, oh yeah, no, nobody can still be doing that. Why not? It's still happening, because devs are being stupid. And I'm being very honest here, because we have been preaching this for a long time. Since about the year 2000, 2001, when we started seeing injection attacks coming on stream, we've been saying that this is something you shouldn't do. And people are still doing it. So I just want to reiterate: don't do this. It's relatively easy to fix, and it's very easy to test for. So please, please, please check for SQLi if you've got anything that's sending data in there. Now, if you want to go find applications that are potentially SQL-injectable, you use a thing called PunkSpider. Now PunkSpider is a bit out of date at the moment, but this is a vulnerability search engine. And what I have done, as I've just said, is I'm going to look for applications on the .no domain and I'm going to try and find SQLi and blind SQLi. And as you can see here, it even generates the nice URL for me. Now, all these sites are SQL-injectable. So for example, the one here, vsk.no, if you're actually the developer for this, this is an example of why this is a problem. If we were to do the Vogin group one, for example, I think if I just do that, it should take a wobble out of it. Maybe not. This one definitely does. One line. Whoopsy. This is an example of poor coding. I know it says default.asp and it's a bit of old code, but this is just one of those things, and this is made by a particular company, because I found the same signature across a number of different sites. And this is a very common thing to do. Because, for example, this personalized candy store, exactly the same: just doing this should knock it out and give me a SQL injection problem. Come on. No, it didn't.
Oh, they must have... no, it's something cached again. Sorry, because I remembered to do this before. There we go. So you get a Microsoft OLE DB provider error. We find these very, very easily, because all you've got to use is a Google search like inurl: default.asp question mark equals. This finds you pages that take GET parameters. Now everyone says, okay, GET parameters, we know this. It's actually quite hard to do a lot of this in .NET, but people still manage it. Because they go, I'll just turn off this, I'll just turn off this, because I want to make it work whatever way I can. Great. Well done. You've just made your Microsoft stack vulnerable. Now everyone's thinking, well, that's fine, but I'd have to write very complex queries and very complex things to attack a website, right? Yeah. Let me show you sqlmap. How many of you are running Windows 10 Anniversary Edition, the latest bits? Good, a couple of people running the new Insider builds. If you are, you can just install sqlmap natively in the Bash shell. If you're not, just download Python and then run it. You can run sqlmap on your Windows machine. Just python.exe sqlmap.py, and it'll work, if you don't fancy running it on Unix. So here, sqlmap. It'll give me an error saying something's wrong. Of course, because I forgot to put a -h. Try help now. So sqlmap is a massively powerful tool for finding SQL injection problems in applications. All you have to do is pass it a URL and it will start looking for stuff. Now, you'll see up here it says -g, Google dork. This means you can go looking for a specific Google search, you pass it in, and then it will start attacking all the websites within that. Because there's nothing better than trying to do a mass attack on everyone else. It allows you to run it through Tor, if you're that way inclined. By the way, kids, if you are going to be stupid and attack someone's website, don't do it from your home IP. No, I'm serious. It's just not good. Anyway. Good on you. So you can then say, for example, I can use it to do enumeration. I can bring out all the banners, all the current users, DB passwords, whatever. What I want to show you now is what I can do with sqlmap. So here, sqlmap minus h. And I'm going to just bring up Troy's site, if we can find it, and I'll show you what he's been doing badly. Okay. Close that. I hope your site works today. Oh, look, someone did something fun. This is always the problem with running this site, because people are actively playing with it. It just doesn't get reset immediately. So you take the risk. So I'm going to do this. All right. It's not host, is it? It's a minus u. Sorry. And what this will do is give you a warning that you've only provided a URL without GET parameters and ask if you want to try other injection points. Did I do the wrong one? Okay. Let's go. Wee. Whoa! Woo! Yeah, I'm reloading it. I don't know what it's taking now. Right. So what I'll do instead: sqlmap against the practice user-info page. So this should allow me to run it. There we go. Now, you always take a bit of fun for yourself. Nothing like having a bit of, whoa, seconds on the stage. So it'll start doing this. It'll go off, find out that this is dynamic, and start its heuristics. It looks like the back end DBMS is Microsoft SQL Server. Would you like to skip the other tests? Yes. Would you like to do that? No.
And then it will start actively looking at the site and finding out different things. It'll do statistical, time-based analysis. It will tell you all the different pieces of the application. And what we'll be able to do after a while is say, okay, show me what's in the database. How am I going to get all the queries out? How am I going to find all the users? It just says, come on. It'll come back. Now, did any of you see my talk yesterday? Great. Do you remember what's the difference between green code and red code? Good. Green code. If you didn't get the joke, go see the previous talk, come back to this talk, and then laugh again. So we can see here that the web server operating system is Windows 8 or 2012, the web application technology is ASP.NET, and the back end DBMS is SQL Server 2012. But if I was to do, for example, dash dash banner, what it'll do is grab the banner of the application and say it's Microsoft SQL Azure (RTM) 12.0.2000.8, built at a particular time, copyright Microsoft Corporation. And this is all just because there's a SQLi in this. But I can also say, for example, let me just get back all the tables, or the columns. I can even dump out all the data within the database if I was so inclined. Because once you have this type of application access, you can do whatever you want. Now, what I know Troy has done here is, I know he hasn't got an admin user, because he's not that stupid. Because if he was running the application as administrator, this would be very, very, very bad. Because then I would be able to upload a shell, or I'd be able to add my own users, for example. But if I was to do, for example, tables here, it'll run this, it'll start enumerating. This will take a little bit of time. And what it'll start to do is bring out all the different tables we'll be able to see. And then we can start even querying the tables and adding our own queries to do stuff. But this, as you can see, doesn't take a lot of effort. It just takes a little bit of time. And if you want to go and find out how badly someone has screwed up, this is a very good way of doing so. Now, if your application is SQL-injectable and you end up getting this, someone will be able to hand you back your own database, no problem. Anyone ever been caught by SQLi? I can't see any hands. Is anyone... yes, Chris, thank you for being honest. There's at least one person admitting it. You've been working in the business a long time, though, haven't you? Nineteen years. There's a common thing. When I was teaching in university, we used to give students an assignment: they'd have to build a hotel reservation system. Okay? Now, we could always find the people who understood SQL injection quicker than everyone else. Because in Ireland, you get names like O'Driscoll and O'Connell. They're O-apostrophe-D, O-apostrophe-C. Those guys figured out SQLi very quickly. Because that poor chap was sitting there going, oh, this doesn't work; they hadn't escaped their data. The first thing I do is I press an apostrophe, and kaboom. So this is a very, very simple way to query out a database and get all the data out. And once you have all the data, if they're not storing their passwords correctly, if they're not hashing their passwords correctly, you have all of that too.
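To make that apostrophe problem concrete, here's the classic single-quote failure as a sketch (the table and column names are made up for illustration):

  -- the developer glues user input straight into the query string:
  SELECT * FROM Guests WHERE Surname = 'O'Driscoll'
  -- the apostrophe in O'Driscoll closes the string early; the rest is a syntax error.
  -- feed it ' OR '1'='1 instead of a name and the WHERE clause is always true:
  SELECT * FROM Guests WHERE Surname = '' OR '1'='1'

The fix is the same on every stack: never concatenate user input into SQL. Use parameterized queries (prepared statements in PHP, parameters in .NET), and the apostrophe becomes just data.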
A lot of the major hacks in the last couple of years have all been SQLi breaches, because someone has done something very silly, thought, ah, it's okay, no one will find it, and then someone runs an application like this. The most common problem is that when applications are done, as in fully baked, the product is finished and has gone into maintenance mode, people start ignoring these security problems. Okay? Now, what I want to show you next is what we can do beyond that. GET requests are kind of easy, right? It's just, oh look, we can see all the parameters. Yeah, yeah, you're not being really hacky. You're just pressing the apostrophe and making it go boom. Nice. Let's do something that shows you what happens when we intercept the traffic and start playing with it. So we're going to fire up Burp here, and Burp is a proxy. Any of you familiar with Fiddler? If you are a web developer and you haven't used Fiddler, you're doing it wrong. Fiddler is a proxy that lets you do traffic interception. Burp is kind of a bigger version of this. It has a lot more bits in it, but if you're not using Burp and you want to use Fiddler, Fiddler does a lot of this stuff the same way. So what we're going to do here is, of course, there's a new free edition, we're going to proxy our traffic through Burp and start intercepting things and changing them. And then we're going to show you what happens when you think, oh, just because I've got a POST request, no one will figure out how to do SQL injection. Yes, they can. So what I want to do first is go to the proxy. Intercept is on, so I now need to go back, and I'll close you, because this caused the problem. I'm going to go back in here and I'm going to set my browser preferences to go through that proxy. I know it runs on port 8080, and I just click OK. So now if I go back here and I go to, for example, the SQLi page, I just go and I view someone's blog. All right, when I click on that, Burp has now got a request. You can see here the request it has captured, and I can just forward that one. Okay, forward again. And I'm going to click intercept is off, so it just forwards all the information onwards. So this allows you to stop every single request the browser is sending between you and the web application and have a look at it. You don't have to get this mad thing. You can just say, this has gone wrong, this has gone wrong, here's something different. But what I want to do first is see if I can do something with the application. So I've got to choose an author here. I'm going to set it to admin and view blog entries, and it brings back some information. Now, if I put intercept on again, I go back up here and I choose a different author. I'm going to use Adrian, view blog entries. When we go back here, I can see that the request looks like this, and I can see here this author, Adrian, and view someone's blog, and that's what it's doing here. So if I right click here and just go send to Intruder. Okay, and I'm going to take intercept off. In Intruder, Intruder allows me to attack different targets and positions. So I'm going to use this POST request as a template to attack the application. Here in the positions tab, it automatically highlights the different parameters we can use.
I'm going to clear that, and I'm going to highlight Adrian. Every time I say that, I really want to do it in the Rocky voice. You know, Adriaaan. Okay. Now I want to do the payloads. We have different payload types, there are different options for it, but what I'm going to do here is load a SQL injection list. Now, this comes in the box already; it's inside the attack lists that ship with Kali. But if you can't find a list, or you don't know how to write SQL injection attacks, go to the Big List of Naughty Strings on GitHub. That has an exceptional list of different strings in all different languages that allow you to attack your application. So what I'm going to do is just start the attack. This is the free edition of Burp, so it will be slowed down. And what we're going to do here is watch what happens with the different results. Okay? We're just going to watch here. If I look at the length column, something is different with some of these. So if I open this one, I can get back the response and the render, and I want to see if there's a bug in this one. This looks okay. That's fine. But if I look at this one, what does this show? Oh, it came back with an error. So if it's got this, we can see there's a SQL injection problem now. We can see there's a bug. Because if I put in an apostrophe, which we all know is the first key to look for in SQL injection, we find that it does have an issue. Right. So what will we do now? How will we pass this to sqlmap? What can sqlmap do if we've got this? Well, you can use the POST request. So I'll close this out. Attack, pause. I'm going to bring this back out and close it. Okay, that stops the current attack. We'll go back and use our HTTP history, and we'll see: this is the one I want. Yep. I'm going to copy out all of this and use Leafpad here, and I'm going to paste it into Leafpad. Close it. Yes. And I'm going to put it in the root directory, and I'm going to call this postrequest2.txt, just in case. And we click save. Now, back in sqlmap: if I do sqlmap dash dash help, dash h, or dash hh I think, sqlmap -hh gives you the full-on list of the different options. And when we go back up, excuse me, when we go back up, come on, we see we've got a lot more options here. There's a thing here: a request file. So we can pass in the POST request and make it work. So what I'm going to do here is sqlmap minus r, postrequest... it would help if I was in the right directory. sqlmap minus r postrequest2.txt. Okay. I'm just going to do that and let it go off and find things. So now it knows what I'm running. It says your web server operating system is Linux Debian, the web application technology is Apache 2.4.18, and we can see the back end DBMS is MySQL 5. Now, if I was to do, for example, dash dash users, I can bring out the users. The user that's inside in there is root. And I can see all the different options I've got, different hashes and bits and pieces. So I can then, for example, do passwords. Would you like to use a dictionary attack against the retrieved password hashes? Yes. Do you want to use a custom dictionary? No. And let it just go and crack the passwords for me. This is running on a single core machine. If you've got a larger machine, it'll have a lot fewer problems with it.
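While that cracks away, the whole POST-request workflow from this demo condenses to a handful of commands. A sketch, with postrequest2.txt standing in for whatever you named the raw request you saved out of Burp:

  # feed sqlmap the captured request file
  sqlmap -r postrequest2.txt

  # then enumerate, step by step
  sqlmap -r postrequest2.txt --users --passwords   # DB accounts; offers to crack the hashes
  sqlmap -r postrequest2.txt --tables --columns    # map out the schema
  sqlmap -r postrequest2.txt --dump                # pull the actual data

For a plain GET target you'd use -u with the full URL instead of -r, and --banner to grab the DBMS version string, as in the earlier demo against Troy's site.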
And that's the idea. That's how easy it is to get the passwords out. And what happens when you have a username and password for the database? A lot of bad stuff. Now, ladies and gentlemen, this is just an example of the different types of tools you can use today to break your applications. It's still going... oh, there it is. So the password for NDC at localhost was pass1234. Nice. Well done, Niall. I mean, system administrator. Now, this is a single core machine running on the processor. If you've got a GPU, they're exceptionally good at this. There are guys who build large boxes, and they have great names like Brutalis, and they come with like eight or ten GTX 1080s in them. Have you seen the new GTX 1080? Any gamers in the house? That massive card, which has like a small nuclear reactor behind it. Yeah. And it only costs like seven or eight thousand kroner. Now imagine ten of these running, trying to crack your password. How long do you think your password would survive? They estimate that they can crack about 350 billion NTLM hashes per second right now. So if you think your password is strong and it's less than eight characters or something like that, think again. It'll probably be cracked in under a second. Now, I'm just showing you examples here of how you can use different types of tooling: build it into your application pipeline, or use these types of tools to really check your systems. Because many of you have probably gone, I don't know how hackers are getting near my application. These tools are what they're using. And these are only the basic ones; there's a lot more advanced ones out there. sqlmap is one of the most simple of the tools you can use today, but it is exceptionally powerful. Now, one of the other things I want to show you here: for example, if I type in tables, this gives me the list of all the tables that are in the application. So I don't even have to query around and try to figure things out. But I can even say, show me, for example, the columns. So there's the list. You can just go, oh, here's my database map, here's what I can do. And if I was to say, for example, all, this will bring out the entire database and dump it. So you can just do a colon... sorry, a greater-than, greater-than, and all of a sudden you can send it out to a text file and examine it at your own leisure. Now, that's kind of the end of the demos. Let's go out of here. Everyone's a bit tired. Whoops. Try again, Niall. Hang on a second. I always hate this part. You can get the links and the bits on www.certsandprogs.com. You can download Kali today. I don't recommend doing it in here, because all the other attendees will probably give out to you, but it is about a 2 GB torrent file, and then it expands quite a bit bigger. Kali on its own works in a VM. It runs in VirtualBox, it can run in Hyper-V, whatever you want, or you can install it on your system directly. It's a very cool set of tooling. All these different tools are free. There's nothing here that's going to cost you money. And with the new versions of Windows, with the Bash shell, for example, you will be able to run these natively on your own boxes. You won't have to use Kali for this.
You can just get them from the different repos. So, I'm going to open up the floor for any questions. Anyone at all? I can't see. Anyone got a question? Come on. We've got five minutes at least, and I can wait here. Yes, Chris? What are the implications of having JavaScript access to a box? Well, the thing is, once you have, for example, XSS, and you can persist it so it happens every time, Troy shows a very good example of stealing someone's cookie. And when we steal someone's cookie, it's not that they get hungry; it's that we steal, probably, their ability to log into somewhere. Instagram had a sidejacking problem where they sent their authorization cookie over HTTP. Meaning you could copy that cookie off the wire. Now, how difficult is it to recreate a cookie in a browser? Anyone going to say, not very hard, Niall? Not very hard, Niall. Thank you. Not very hard, Niall. You go in, you recreate the cookie. And I did this as an example. What had happened was, really funny story, my sister-in-law had come over. She wanders into the house; my Wi-Fi Pineapple is up because I'm testing some stuff, and her phone, her iPhone, is like, woo, Wi-Fi. And it was NSB Interactive, you know, the train Wi-Fi. And she jumps on, and she was on Instagram just posting some stuff. And this cookie flies across with sslstrip, and I go, huh, this is kind of cool. So I copy the cookie from the Wi-Fi Pineapple's UI into my browser, and I log in as her. And then I post up a picture for her, and all I hear is my wife going, Niall! Yes, dear. Stop it. Okay. There's always a good thing when your office is well away from the living room: you get a good five-second head start to get out the door. But to answer your question, Chris: if you've got persistent XSS, what happens is I can't trust that the JavaScript you're serving to me is correct. For example, how many of you think that jQuery is epic and good, and all JavaScript is good? How many of you would notice if I was to change your jQuery, just slightly adjust it, and add logging in there so that it emails me something? I can add any type of dodgy script I want and you won't even see it. Because how many people view the source of the website they're looking at? How many people turn JavaScript off? Ever tried turning JavaScript off on the web? Everything stops. XSS is probably more dangerous than we give it credit for. It's not just, oh, you put up a message box. Wow, good, well done. If I can put in something that's logging, every time you type in a piece of information, it just sends your keystrokes out to me. All your passwords belong to me, baby. You know? Good question. Anyone, any more questions? Going once, going twice? Go to Troy's talk! Thank you very much!
Breaches, breaches everywhere. But shouldn’t the dev team know to check for crazy inputs? Well, maybe they do, but maybe they don’t. In this session we’ll look at website fuzzers, proxies, and other tools that can be used to test your site security and provide insight into where you might need to focus your development efforts. We’ll discuss open-source penetration testing tools, including usage and benefits.
10.5446/51867 (DOI)
Hello. Hello, everyone. Welcome to the Node.js presentation: You Don't Know Node. We will go over five core features. It's not going to be about a fancy framework or cutting-edge advanced stuff like promises or anything like that. It's going to be about the core, the fundamentals, the basics. So even if you work with Node.js, say on build tools for front-end development, you might not know about these core features. So let's get started. And this is a hands-on presentation, so I want you to take out your laptops and download the code. I have it on GitHub, the code and the slides and the PDFs, and follow the demos that I will be showing you on your computers. And turn off all the IMs and Slacks and notifications. They're doing really well over there; maybe we can beat them in terms of making sounds later. Can I have a little bit more volume in my monitor? Thank you. So take a picture, get the slides, and follow me with the code examples. So first of all, I want to remind you why we're here, why we're coming to this type of conference, and why we're doing software. I want to put up this big statement: we write software, we write apps, to make our lives and the lives of other people better, basically. Because software changes lives. Software is pretty much everywhere. Is that my computer? Did you have a different adapter? Oh, the power one. Do you know where it is? Okay, that would make sense. Let me try to mirror. No, it's not working. It used to be good, right? Let me try another cable. It's so strange, because the picture is perfectly stable down here, but on the projector, from slide three, it started to fly around. That's actually perfect, because you have time to download the slides. Okay, it's stable now. Thank you. Okay, so, starting over again: welcome to You Don't Know Node. This is a hands-on presentation, and in case you missed it the first time, this is the repository where you can get the slides and the code. This presentation will have a lot of code examples, and I want you to have the code so you can refer to it later. So, we're building software and we're trying to use better technologies, better frameworks, to improve our lives, the lives of our loved ones, and the lives of other people in general, right? Software is becoming everywhere. And with this big goal, I will introduce myself. I published and wrote 12 books, not counting Chinese, Korean, and Russian translations. The most popular ones are Practical Node.js and Pro Express.js. You might have seen them on Amazon.com; they were on the first page if you search for Node.js. My new book, React Quickly, is coming in a few months. That will be about React.js. You can follow me on Twitter at azat underscore co. I tweet about JavaScript and Node.js mostly. And I also have a blog. And if you have questions after this conference, you can reach me at my email, hi at azat.co. I work at Capital One. It's one of the US top 10 banks in the financial industry. I'll talk about it more in the next slide. Before that, I worked at startups and US federal government agencies. I would like to highlight my experience with DocuSign and Storify.
That's where I got to use Node.js at a pretty large scale, and I saw with my own eyes how Node.js is beneficial in terms of having a generalist approach, in terms of performance, and in terms of having a better development experience. I also teach at NodeProgram.com, where you can find live in-person trainings in Node.js as well as online trainings. So Capital One, it's in the top 10 US banks. We're also in the UK and Canada. And in the US, we're famous for our Visigoth commercials, but most people think they're Vikings. Some of them are funny. So why use Node.js? Let's start with the basics, right? I want to give you some ammo, so when you go to work on Monday next week, you have something to tell your coworkers and your team leads in order to convince them: hey, maybe we should use Node.js for our next project. So the biggest selling point of Node.js is its performance. Basically, it's faster. And it's faster because it optimizes for input/output-bound processes. Most of the time, input and output are the most expensive tasks that we have, compared to CPU-bound tasks. So if we can optimize for them, that gives us a good boost in performance. So Node.js has this thing, it's called non-blocking I/O. How many of you are already familiar with this term? Raise your hand. Okay, about half of the audience. So for the other half, this will be beneficial. Non-blocking: this diagram shows you a high-level overview of how it's implemented in Node.js. We have a thing called the event loop. The event loop is always looking for something to execute. Basically, it's never blocked, or almost never blocked. And it allows our Node.js-based system to process multiple requests pretty much in parallel, pretty much at the same time. And the event loop delegates those heavy input/output tasks, like reading from the file system or writing to a database, or the most popular thing we like to do when we're building web apps, which is making HTTP requests to APIs or third-party services. All those time-consuming tasks, the event loop delegates. Let's go a little bit more low level. This is Java code. How many of you are Java developers here? Okay, about 20 percent, not that many. Most of the time I'm teaching Node.js to Java developers. But you can guess what this means, right? Thread.sleep puts the entire process to sleep. So nothing will happen on this particular thread until the timer expires, right? So the entire thing will be basically useless. Okay, we will see step one, step two, and then step three. Contrast that code with this Node.js. So now we have fancy callbacks. setTimeout takes a callback, which is a function definition. It's an anonymous function definition; it doesn't have a name. And it basically says, okay, schedule that for the future. Don't execute it now. And the future will happen at one second, so 1,000 milliseconds. But the good thing about this code and this Node.js implementation is that setTimeout is not blocking. So Node.js, this process, will continue to execute. So we will see step one, step two, and then after one second we will see step three. Okay, and then if we have more code, Node.js will still continue to execute that. So the result of this would be step one, step two, and then step four, not step three. And then after one second we will see step three and step five.
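Reconstructed from the slide, the non-blocking example looks roughly like this (a sketch; the exact wording on the original slide may differ):

  console.log('step 1');
  console.log('step 2');

  setTimeout(() => {
    console.log('step 3'); // scheduled for later, not run now
  }, 1000);

  console.log('step 4');

  setTimeout(() => {
    console.log('step 5');
  }, 1000);

  // prints: step 1, step 2, step 4 ... then, about a second later, step 3 and step 5

Nothing waits on the timers: both callbacks get queued, the synchronous lines run straight through, and the event loop fires the callbacks when the timers expire.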
So if you're familiar with how callbacks in jQuery or promises in the Angular framework work on the browser, this is a very similar approach, basically: we're scheduling something in the future. But the benefit is that our process is not blocked, so we can process a lot of requests. Now, putting processes to sleep, that's not very useful in itself; most of the time we're building web services. But that analogy, setTimeout, that's exactly how Node.js implements asynchronous input/output. It could be writing to a database, or it could be making a request. Let's take a look at this diagram. This is a blocking system, maybe written in Python or Java, and we have four clients. Please take that, it must be important. We have four clients, and they submit requests. The client at the bottom, he or she, has three requests, but each request is blocked by the previous request, because they're all hitting that one thread. Compare it to standing in line for a coffee. We have that pretty cool Twilio booth downstairs at the expo. I saw a lot of people standing in line, 10, 20 people, during the breaks. That's your typical blocking system, because you have to wait for the people in front of you to order your drink and to get your drink. In Node.js we have a non-blocking system. In this diagram, again, we have that event loop, which is delegating the work, and the event loop is always available, or almost always available. Compare that to the same Twilio coffee bar, but with the option where you can text, or Facebook, your drink order, and then you can walk around, talk with other people, and they text you back when the drink is ready. This is a much more enjoyable experience than waiting in line, not able to do other things. So texting to get your coffee, that's non-blocking, that's asynchronous. That's what we usually want. This quote is from CodingHorror.com, a pretty famous blog; I would recommend subscribing to it. Basically, blocking systems have to be multi-threaded, because each thread is blocked. But multi-threading can be very dangerous if you don't know how to use it. You can get race conditions, you need to know how to synchronize the data between multiple threads, you might have deadlocks, et cetera. It can really blow up in your face. And yeah, blocking systems have to be multi-threaded. So the designers of Node.js, the original creators, decided: we're not going to create multiple threads; we'll just have a single thread. And this is actually a beautiful thing, because with a single thread you can avoid all that complexity and work in a purely asynchronous and non-blocking environment. And that's one of the concerns people raise: oh hey, Node.js is single-threaded, it's not going to perform well. But that actually forces you to think about scalability early on, and I'll show you how to scale your system as well. But before we do that: it's still possible to write blocking code in Node.js, right? Node.js is a platform; it's not going to prevent you from writing blocking code. So you need to be very careful how you implement your Node applications. For example, this is a CPU-intensive task that iterates many times, and on my machine it was taking from 100 to 1000 milliseconds. Obviously, it's bad code. You don't want to do that in Node.js, or at least you don't want one process doing CPU-intensive tasks. Sometimes, most of the times, our blocking code is not so obvious.
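The obvious kind first: a sketch of a CPU-bound loop like the one on the slide, which stalls the event loop for as long as it takes to finish (the iteration count is made up; tune it to your machine):

  // blocks the single thread: no other callbacks, timers, or requests
  // can be serviced until the loop completes
  function heavyComputation() {
    let sum = 0;
    for (let i = 0; i < 1e9; i++) {
      sum += i;
    }
    return sum;
  }

  console.time('blocking');
  heavyComputation();
  console.timeEnd('blocking'); // hundreds of milliseconds on a typical laptop

While that loop runs, an incoming HTTP request to the same process just sits in the queue, which is exactly why you keep CPU-heavy work out of your web-serving process.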
Most of the time, it's buried under some module. FS is the file system module, for working with files and directories. And in this example, I'm using synchronous methods. So we don't see the loop that's involved, but I know that it's synchronous. And the results would be: accounts, then hello Ruby, then IPs, and then hello Node. So just as in a traditional blocking language like Python or Ruby or Java, it's executed in the order in which you have the statements. No surprises here, but basically it's blocking code. This is the non-blocking version of exactly the same functionality. FS has two types of methods. They're like siblings. They have the evil siblings, which are synchronous, and they have the good siblings. This is a good sibling, the asynchronous method readFile. And immediately you can see the difference in how I implement it: I use callbacks instead of using an assignment. I cannot use an assignment, because the data is not available yet. The data will be available only in the callback. So the contents will be defined only in the callback. But then I might have a race condition over which data I get first, accounts or IPs. The data will always come after displaying hello Ruby and hello Node, though. So far so good. And feel free to ask questions in the middle; we're not going to have a formal Q&A at the end. Why bother having asynchronous? So the question is, why bother to have the asynchronous version? Because without asynchronous code, you will be blocking your process and you will lose that benefit of Node.js. Sorry, the reverse: why bother having the synchronous version? So why bother to have the synchronous code at all? It's there because in some edge-case scenarios you might need it. For example, you're starting a server and you're reading from a configuration file. It doesn't make sense to start the server at all if you cannot get the configuration settings, like the port number or IP addresses. So in some edge-case scenarios, it's there for you. But most modules implement asynchronous interfaces.
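A sketch of those two siblings side by side (file names borrowed from the demo; any two small text files will do):

  const fs = require('fs');

  // evil sibling: synchronous, each read blocks until it finishes
  const accounts = fs.readFileSync('accounts.txt', 'utf8');
  console.log(accounts);
  console.log('hello Ruby');
  const ips = fs.readFileSync('ips.txt', 'utf8');
  console.log(ips);
  console.log('hello Node');
  // order is fixed: accounts, hello Ruby, ips, hello Node

  // good sibling: asynchronous, the data only exists inside the callback
  fs.readFile('accounts.txt', 'utf8', (error, contents) => {
    if (error) throw error;
    console.log('accounts:', contents);
  });
  fs.readFile('ips.txt', 'utf8', (error, contents) => {
    if (error) throw error;
    console.log('ips:', contents);
  });
  console.log('hello Ruby');
  console.log('hello Node');
  // hello Ruby and hello Node print first; accounts and ips race each other after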
So Node is typically much faster than other platforms. Now, all the benchmarks, they don't really test your business logic, so we don't have exact data. But in my total experience, working with teams and reading blog posts from PayPal, eBay, and other big companies (Uber also runs everything on Node.js), usually they report a dramatic improvement in performance, even an order of magnitude, like 10 or 20 times faster. So that's all good, right? We get the performance, our company spends less money on servers. That's all wonderful. But how many of you have actually hit the performance limitations of traditional languages like Java or Python? Probably not many, right? I'm in the same boat. Most of the apps I'm building, unless it's for a DocuSign or a Storify, I don't experience that issue. Remember Twitter in the early days, where you would see that whale? They had that performance issue, right? But I'm not experiencing it. So for me, my personal favorite benefit of Node.js is that you can have one language across your entire stack. We have JavaScript on the browser, right? It's here to stay, it's not going to go away. We have a new standard now, ES6 and ES7, et cetera, so it's evolving, it's becoming better. Then you need to know another language, like Python or Java, for the backend. And then for the database, if you're using a traditional SQL database, you need to know SQL, which is a complex language in itself. You have all those joins, and you need to know the syntax, right? So three languages just to build your application, but then you need to know HTTP, you need to know CSS, you need to know HTML, and a lot of other technologies, right? Having Node.js on the backend eliminates at least one language. And then if you use a NoSQL database like MongoDB, which I really, really love, then you can eliminate one more language. Having one language allows me to think faster, because I'm not switching the context. And I'm not making stupid mistakes like, oh, I forgot a semicolon here, or I need a double quote instead of a single quote, because it's a different language. I'm reusing my code, so templates and utility libraries from the browser, I can reuse them. And I remember those interfaces, I remember how to call a method faster, I remember all the methods and properties, because I'm using them twice as frequently. So that's a good thing. That's my personal favorite thing about Node.js. And for you, if you're learning Node.js but you know browser JavaScript really well, it will probably take you a weekend to get to a comfortable level with Node.js, because most of Node.js is your browser JavaScript. We have the same array methods, the same string methods, so it's very, very great. But Node.js is not triple-equals browser JavaScript. It's not completely the same as browser JavaScript, right? So what are the differences? We don't have the window object. Obviously, there are no windows on the server side. We also have more power. We can get environment information, we can work with the file system, et cetera. So there is this thing called global, either lowercase or capitalized, and that gives us a lot of properties and a few methods, those extra things that we're lacking in browser JavaScript. For example, __filename and __dirname: those give you the absolute path to the currently running script, or just the path without the file name. So that can be convenient when you need to know the location. Then we have native modules in Node.js. We don't have modules in the browser. Even if you're using ES6 modules, you have to rely on some transpiler like Babel or System.js. So in Node.js, the problem is solved. It's the CommonJS syntax. It's beautiful. module.exports is how you export, and require is how you import a module in Node.js. It's all native. A few other things: how do I get command line input, for example, if I'm building a new Grunt- or webpack-style tool? How do I get information about my system, like versions or platform? How do I read environment variables, for example, API keys or passwords? There is this thing, it's called global.process, or you can just access it as process. Every global property is accessible without the prefix; you can either type global explicitly, or implicitly just use the name process. And it's a really big object. I will just highlight some of the properties. For example, pid will get you the process ID. versions, you can get versions of pretty much everything, for example the V8 engine version. You can get the architecture. argv, that's how we get the input from the command line. And env, that's where your environment, your API keys and passwords, lives once you're ready to deploy to production.
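A quick sketch of poking at those properties (save it as info.js, a name I've made up, and run it with a couple of arguments):

  // info.js -- run with: node info.js --port 3000
  console.log('pid:', process.pid);               // process ID
  console.log('platform:', process.platform);     // e.g. 'linux', 'darwin', 'win32'
  console.log('arch:', process.arch);             // e.g. 'x64'
  console.log('v8:', process.versions.v8);        // one entry among process.versions
  console.log('argv:', process.argv);             // [node path, script path, '--port', '3000']
  console.log('NODE_ENV:', process.env.NODE_ENV); // environment variables live here

Note that process.argv[0] and [1] are the node binary and the script path; your own flags start at index 2.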
So don't store your passwords in your source code. And don't create a config.json for them either, because you can use environment variables. Oh, we have issues again. The ghost of the demo. Okay. So it's a good thing to have some type of monitoring, when you're building a web server especially. You can get uptime, how long your process has been running, and then the memory usage. cwd, that's the current working directory. And then we can exit the process, or we can kill another process, from Node.js. So I hope you're starting to understand that Node.js gives you more power. Obviously, in the browser we don't have that power, because you don't want to go to a website and have that website create a file on your file system, right? Okay, so let's move into some of the Node.js patterns. How many of you really understand and like callbacks? Oh, wow, it's like 40 percent. Really cool. I like callbacks, until I hit callback hell. There's even a website; it's called callbackhell.com. And once you're inside that nested pyramid of doom, sometimes called the nested approach, it might be tricky to really understand what is happening. So callbacks can be, especially in large projects, not very scalable in the developmental sense, meaning when you add more developers to a project, the code can become harder to understand. And sometimes nested callbacks lead to problems. So we have advanced patterns like promises, generators, and async/await. I'm not going to talk about them, because they're not part of the core. Core modules, they don't use promises, right? Version 6 of Node.js comes with promises without any flags; you have the standard ES6 promises. But when you use core modules, you have only two options: callbacks or events. So let's focus on events, because callbacks, most people are familiar with them. Basically, events are your standard observer pattern. We have a subject, we have observers, or event listeners, and then we have someone else triggering those events. This is how you create it. You require events, which is a core module, you don't need any npm for this, and then you create a new object. This will be our subject, the emitter, and we're using the prototypical instantiation pattern, so we need the new keyword. And we define our event listener, our observer, basically. The way we define it is very easy: we provide a string, and then we provide a function. The string is the name of the event, and when it's triggered, the function will be executed. And we can have multiple listeners for the same event name, or for different names as well. So let's consider a slightly more interesting example. I have two listeners, but they have the same event name. So what will happen? They will be triggered in the order in which I define them. So the last line, emitting knock, will basically trigger the event, and then I will see "who's there" and "go away". So immediately you can understand the benefit compared to callbacks. With callbacks, we have just one function that is executed at the end. Here, with events, you can have multiple functions. You can execute them in the middle, at the end, it doesn't matter. It's up to you. So you get more power.
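Reconstructed, the knock-knock example from the slide looks something like this (a sketch; the names are chosen to match the talk):

  const EventEmitter = require('events');
  const emitter = new EventEmitter();

  // two listeners for the same event name; they fire in the order defined
  emitter.on('knock', () => {
    console.log("who's there?");
  });
  emitter.on('knock', () => {
    console.log('go away');
  });

  emitter.emit('knock'); // prints: who's there?  then: go away

In older Node versions you'd see require('events').EventEmitter; requiring the module directly like this works in Node 4 and later.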
Now, this is a more realistic example. Let's say we have a module, it's called job.js, and the code is in that GitHub repository. But the job needs to be customizable. So at the end, we want to run some custom logic, but we don't want to hard-code it in the module. We want to give the consumers, the developers who use the module, the power to customize what happens at the end. So we can use events for this. The way we do it: we basically emit an event, and then in our documentation we tell developers, if you want that custom ending, go ahead and implement an event listener, because we will be emitting an event. And then, say I'm a developer and I want to implement the weekly job, a weekly newsletter. I create this event listener, because I know 'done' will be emitted. If I don't have that event listener, that's cool too, because it's not going to crash. It's not an "undefined is not a function", right? So this is a powerful pattern for creating modules that abstract away some of the code while still giving flexibility to the developers who use them. Some of the useful methods: for example, we can remove event listeners. We cannot do that with callbacks. We can specify, hey, execute it just once, with once. That's another advantage. For some of the other Node patterns, I wrote a blog post, Node Patterns: From Callbacks to Observers. I also talk about modules and singletons there; check it out. Okay, so let's move on. Problems with large data: it's slow, because you have to wait for all the data to load. There's also the buffer limitation in Node.js. One gigabyte, that's it. You're done. The last one is just a joke. We can use streams. Streams are, again, a core module in Node.js. They allow us continuous consumption of the data. We can transform it, we can do something with the data, without waiting for the whole chunk of the data to load. Okay. We have four types of streams: readable, writable, duplex, and transform. Duplex means it's both writable and readable, and transform means it's transforming the data as it passes through. Streams already inherit from EventEmitter, so we can use that interface to implement our streams. Streams are everywhere, especially in the core, and not just the core. We can use streams for HTTP requests and responses. We can use them for standard input and output, and for reading and writing files. This is an example of a readable stream. If you're following along on your machines, it's stdin.js. Can you see it? Probably not. So basically, each line, each new string, is a chunk of data, and I'm processing it as I go. This is how it looks in code: process.stdin gives me my readable stream, then I execute resume and set the encoding, and then I'm listening for the data event. The data event will fire each time I have a new line on my terminal, and I can start processing that text. And then once I'm done, it's good to also have a listener for the end event. Okay. So, demo, we did this. For readable streams, we also have a different approach. We can use the read method. The read method is synchronous, but it's okay for it to be synchronous, because it's really just a chunk of data, just one line. Imagine that large data file: we're processing it line by line. And when read returns null, that means the stream is over, so we can have a while loop comparing against null.
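A sketch of that stdin.js example, event style first, with the read() style noted in the comment at the end:

  // stdin.js -- run `node stdin.js`, type some lines, Ctrl+D to end
  process.stdin.resume();
  process.stdin.setEncoding('utf8');

  process.stdin.on('data', (chunk) => {
    // fires for each chunk (each line you type); process it as it arrives
    console.log('chunk:', chunk.trim());
  });

  process.stdin.on('end', () => {
    console.log('stream finished');
  });

  // the alternative style: inside a 'readable' listener, loop with
  //   let chunk; while ((chunk = process.stdin.read()) !== null) { ... }
  // read() hands back one chunk at a time and returns null when there is
  // nothing more to consume right now.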
An example of a writable stream is process.stdout — that would be the output of your Node script. And there is a write operation; we use it like this: process.stdout.write(). This would be an analogue of console.log — in fact, console.log uses something similar. So we're writing to our output stream. What about HTTP? Most of the time we're building HTTP servers, right? That's where streams really, really shine, because we don't want the consumers of our API or our web app to wait too long. In this example, I have a server and I'm getting that request object — your typical hello-world type of server, very minimalistic. But then, instead of waiting for the entire payload, I'm creating an event listener for 'data' and calling transform — let's imagine it's a function defined somewhere else, maybe a module. The key here is that the chunk is available right away: I don't have to wait for an entire one-gigabyte file to load. And then at the end, I can parse it, I can do something else with it. There's also the pipe interface. Pipe is very similar to the pipe operator in your Linux terminal, or the command prompt in Windows. Basically, we take one stream and pipe it into another, and then into another — we're passing the data along. So r.pipe(z).pipe(w) will read from a file, compress it, and then write to a different file, all without any manual buffering. Speaking of buffers: Buffer is a special data type that we don't have in browser JavaScript — there are ArrayBuffers in ES6, which are kind of similar. The way we create buffers is Buffer.from(); this was just recently changed in version 6, and in version 5 you would use a different approach. I have a link to the documentation as well. If you're new to buffers: sometimes, when you're expecting a string, you might see something that looks like an array of numbers. That means you didn't convert from a buffer to a string. You can easily convert by specifying an encoding. By default, when you're reading or writing a file, or handling a request — especially if you're not using any frameworks — it's a buffer, so you need to convert it. This is how: if you just call toString(), it uses UTF-8 by default. So remember fs from a few slides back: when we're reading from a file, the data is a buffer by default, because that example doesn't specify an encoding. Okay, so server-stream — it's a small Express.js server I've written, and I'm launching it with node server-stream. It has two endpoints: one endpoint uses a stream, and the other does not. Let's go to the one that is not using a stream. It serves a huge infographic about Capital One — eight megabytes or something like that. I go to the DevTools, to Network, and I click on the response — actually on the headers — and it's 6.57 milliseconds; let's remember that number. By the way, I love DevTools. Such a nice thing. How many of you like DevTools? Yeah. So the streaming endpoint is 1.4 milliseconds — the response time is shorter, sometimes an order of magnitude shorter. Right now it's 0.42. So this was 6 milliseconds and this is 0.4 milliseconds. To be clear, that's not the entire infographic — that's only the first byte. But that's enough for users to start seeing something, right? So that's the benefit of using streams.
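As a sketch of the pipe chain just described — the file names here are my own, any big file will do:

    const fs = require('fs');
    const zlib = require('zlib');

    // r.pipe(z).pipe(w): read a file, gzip it, write it out — no manual buffering
    fs.createReadStream('infographic.png')
      .pipe(zlib.createGzip())
      .pipe(fs.createWriteStream('infographic.png.gz'));

And a rough version of the two demo endpoints, in plain core http rather than Express, just to show the shape of the difference:

    const http = require('http');

    // No stream: read the whole file first — slow time-to-first-byte
    http.createServer((req, res) => {
      fs.readFile('infographic.png', (error, data) => {
        if (error) throw error;
        res.end(data);
      });
    }).listen(3000);

    // Stream: pipe the file into the response — fast time-to-first-byte
    http.createServer((req, res) => {
      fs.createReadStream('infographic.png').pipe(res);
    }).listen(3001);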
I also have two more endpoints that use a different interface, but the point is the same: streams are faster. Some good stream resources: there's an automated workshop with exercises you can go through, and it will test your answers — a great tool to learn streams. And there's also the stream handbook. Okay, so let's go back to Node.js being single-threaded. I like to flip that around and make it an advantage instead of a disadvantage. How? It forces developers to think about scalability early on. Most of us build microservices, we build distributed systems, right? With Java it's like, oh, it's multi-threaded, I don't need to worry about it. With Node.js: oh, I'm starting a new project, I need to think early on about how it's going to work across multiple stateless servers. Luckily, there is a core module called cluster. The idea is that we have multiple processes. We can spawn multiple processes with this core module, and one of them will be the main one — it will control everything else — and then we'll have workers who do the main job. Most of the time that's web servers: the workers run web servers, and the master controls those workers. And typically we have as many processes as we have CPUs. It's not a hard rule — you can have more — but you get diminishing returns. So to maximize the CPU usage of your machine, the best practice is to have the same number of workers as CPUs. This is the code, and as you can see, it's very easy. All we're doing is having an if condition on cluster.isMaster, and we fork new processes. cluster.fork() creates a new Node.js process from exactly the same file, so this code runs for both the master and the worker. And then under cluster.isWorker, that's where our web server code goes — it can use Express or just core http, it doesn't matter. And the number of CPUs comes from the core module called os. So this is an example of running multiple processes: node cluster. Oh, I have something else running, so let's just kill everything. Okay — as you can see, I have four workers and they have different process IDs. That's proof that they're different processes, right? And all of them are listening on the same port, so this is like load balancing: four processes listening on the same port. Another benefit is zero-downtime reloading. Say you have a new version of your server and you want to restart: if you have multiple workers, you can kill them and replace them one by one without losing availability, right? So again, this forces you to think about scalability and to build better systems early on. Now I'm going to a new window and I'm using a tool called loadtest. It's similar to Apache ab, or JMeter for Java — a load-testing tool, but written in pure Node.js. I will be hitting my localhost with a 20-second timeout and at most 10 concurrent requests. Going back, I can see that different processes respond to my load-testing requests — you see the numbers change, right? It will take some time, but it will give us some numbers. If you're running this on your machines, you can tell me your numbers and we can compare whose machine is faster. So basically, I was able to do 800 requests; when I do it on a Mac Pro, I get 2000 requests. Okay.
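For reference, the cluster example being exercised here boils down to this (the port number is mine):

    const cluster = require('cluster');
    const http = require('http');
    const numCPUs = require('os').cpus().length;

    if (cluster.isMaster) {
      // Master: fork one worker per CPU; each worker runs this same file
      for (let i = 0; i < numCPUs; i++) cluster.fork();
    } else {
      // Worker: every worker listens on the same port — built-in load balancing
      http.createServer((req, res) => {
        res.end('Handled by process ' + process.pid + '\n');
      }).listen(3000);
    }

Kill one worker and the master can fork a replacement — that's the zero-downtime reloading story from a moment ago.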
So there are other libraries for doing a similar thing — having multiple processes. The advantage of cluster is that it's core, but other libraries have more features: for example PM2, or StrongLoop cluster control. I recommend using one of those unless you want to build a new PM — PM stands for process manager. Some of the advantages: again, you get zero-downtime reloads, so your app will be forever available, forever alive. Another advantage of PM2 is that you don't need to modify your server code. Remember the if/else condition in the cluster example? With PM2 it's just your server code; you don't do anything to it. And it also gives you a nice interface — I'll show it to you. It's a table: it shows me how many processes I have, how many are online, et cetera. So PM2 is a little bit nicer. Okay. There are a few more ways to launch external processes from Node.js. For example, maybe you want to build a system where Node.js just does the input/output operations, and the heavy lifting — the CPU-bound tasks — is outsourced to a different platform or language, maybe Python or Java. You can do that with spawn. Spawn can handle large data, because it works with streams. Fork is only for Node.js processes — you've seen how it works. Exec is for something small, maybe a Linux command to get the memory or some stats, because it buffers the output: it's not using streams, so you don't want to hit the buffer limitation. I have some examples of how to use spawn: basically, you provide the name of the command, then the arguments, and then you listen for 'data' — again, event emitters. Fork is very similar, only we don't specify that it's Node, because we assume it's Node; fork is for Node, a narrow case of spawn. Exec has a different interface: we don't use events, it's just a callback — just a callback, because there are no streams. And all three of them come from child_process. I don't know why they named it with an underscore — most modules in Node are named with a dash — but that's one of the inconsistencies we have. Okay. One of the other objections to using Node is that it's hard to think about asynchronous code. Our brains haven't really evolved to think about processes running in parallel, concurrently; we're much better suited to thinking about synchronous code. And on top of that, it's also hard to debug errors in asynchronous code, because we lose the context. For example, remember setTimeout: when the callback that was scheduled in the past finally executes, we've lost that context. Let's take a look at this example. Try/catch, right? We know what it does: it lets us catch errors. This code will work absolutely beautifully — but it's synchronous code; there are no callbacks, no events. If we have callbacks and events — all I did was put a setTimeout here, and I'm throwing the error in the future — this app will crash. Try/catch is totally useless here; this code will fail miserably, because the error happens in the future, in the callback, and we've lost that context.
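To make that concrete, here is a minimal sketch of the failing example — the throw happens on a later tick of the event loop, long after the try/catch has finished:

    try {
      setTimeout(() => {
        // Runs on a future tick; the surrounding try/catch is already done
        throw new Error('Async fail!');
      }, 1000);
    } catch (error) {
      // Never reached — the process crashes instead
      console.log('Caught:', error);
    }

Run it, and one second later the process goes down despite the catch block.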
This is how I feel when an asynchronous error happens and no one handled it. So how do we deal with it? Here's the checklist; let's go through it. First of all, again: event emitters. Most of the modules — virtually all of them, especially the core modules — will emit an 'error' event, so you want to listen for that. As a developer consuming those modules, implement this event handler, and at least trace it — save that error message somewhere, log it. You can also use chained methods; that's just a different syntax, same idea. And here I'm exiting the process with code one, which means error. Again, a different approach, same idea: with a request, there can be two types of errors. When I'm making the request, I need to listen for the 'error' event, and then the response can have an error as well, inside the callback. Then this one is the mother of all error events: when nothing else works, have this listener on your process. This is the last chance, the very last chance, to do something useful — log the error, trace it, create a notification, send an email to your webmaster, to your DevOps. So process.on('uncaughtException') — always have it. Now, there is this cool thing called domain. It has absolutely nothing to do with domains like .gov or .com, the URL domains — I have no idea why they named it that. And in the docs it has a label saying it's softly deprecated, which is very confusing; basically it means they might remove it from the core in the future. But right now it's in the core, and even when they remove it, it's still relatively popular with users, so it will live on as an npm module. So I don't see any reason not to use it. This is the idea: we create a domain, and our failing, error-prone code goes inside of run() — run() wraps our bad code — and then we're not scared of asynchronous errors anymore. This code will work; I can prove it to you. If I go here: this is async-error — it failed. And this is domain-async — it didn't fail; it gave me a nice error message, which I programmed there. Okay.
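A minimal sketch of those last two checklist items — the last-chance handler and the domain wrapper — under the same setTimeout failure as before:

    // Last chance: at the very least, trace the error before going down
    process.on('uncaughtException', (error) => {
      console.error('Uncaught:', error.message);
      process.exit(1); // exit code 1 means error
    });

    // Domain: wrap error-prone async code so its failures become catchable
    const domain = require('domain').create();
    domain.on('error', (error) => {
      console.log('Nice error message:', error.message); // handled, no crash
    });
    domain.run(() => {
      setTimeout(() => { throw new Error('Async fail!'); }, 1000);
    });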
Now the last thing you can do — number six; it's a bonus, and it's really cool for the IoT and hardware geeks among you. You can build a C++ addon and create a JavaScript interface on top of it. Node.js works very beautifully with C++. So you create your C++ code — this is just boilerplate that I copied from the tutorials for those modules. What I'm doing is returning the string 'capital one' and exporting the method hello, so hello will be available in my Node code when I import this module. Then I need to create this file, binding.gyp; it basically has the name of my module — my C++ addon — and my source file. Then I need this tool, node-gyp, which compiles everything for me, and all I need to do is execute those two commands. That's it. It's done, it's working, and I will have this folder, build/Release. Now in my code I'm importing that using require — remember, require is a global method — and that's it: I will have my string, and this string is coming from C++. Okay. So if you're interested in working with Node.js: Capital One is hiring in the UK, US and Canada — shameless plug. I almost want to join them myself; it's so much fun there. We're also using cutting-edge open source technologies like React, Kotlin, Clojure, Angular 2, et cetera. Some of you are working in heavily regulated industries — healthcare, defense, finance; Capital One is finance. So if you're interested in how we solve the problem of being in a heavily regulated industry while working with open source, there's a talk I did at Node Interactive. So, the 30-second summary: event emitters are everywhere in the core — it's good to know them. Streams are very, very powerful; if you know how to use them, they'll basically make you the guru, the ninja, the pro on your team. Buffers — you need to convert them. Clusters allow you to scale. And you can build C++ addons. Slides, and my contacts — write me an email. For further learning, check out my blog at NodeProgram.com. And one last thing: this is a quote from the same blogger, codinghorror.com — JavaScript is really, really becoming popular. Thank you.
This talk will give a sneak peek of the most interesting and powerful Node.js features. Node.js is quickly capturing the programming world, not just in web, but in IoT, drones, robots and embedded systems. Do you use a Node tool such as Grunt, Gulp or Webpack to build your front-end assets? Do you use an HTTP server built with Express? Do you generate code using Yeoman? But do you really know Node? If you are a geek like most of us, then you'd appreciate this presentation. You'll become more confident in the internals of Node.js and understand how certain things work. Node.js is fast and scalable web-oriented non-blocking I/O built on top of the Google Chrome V8 engine. Almost every web developer uses Node or Node-based tools to some extent. However, Node has some really powerful features worth knowing. This talk dives deep into the core mechanisms of the Node.js platform and some of its most interesting features, such as:
Event Loop: Brush-up on the core concept which enables the non-blocking I/O
Streams and buffers: Effective way to work with data
Process and global: How to access more info
Event emitters: Crash course in the event-based pattern
Clusters: Fork processes like a pro
AsyncWrap, Domain and uncaughtException: Handling async errors
C++ addons: Contributing to the core and writing your own C++ addons
10.5446/51693 (DOI)
Welcome. I fully understand that there was a party last night, so thank you for being here this early, at nine o'clock. There are not many people — I think most of them are still in bed, sleeping. So let me start with a short introduction. I have two laptops here, and that's why I came here this morning at eight o'clock: I started up my laptop, and now it says this — updating Windows. So all the demo preparations that I did — nice presentation, nice Spotify playlist, nice demos — it's going to be a little bit different now, I think. I have the laptop of my colleague; my other laptop is at 88%, so I have good hopes that during this session it will come up again. I have some demos that I can only do on that one, because I prepared them there, but most of the stuff that I'm going to show you is in the cloud. It's on Visual Studio Team Services, which is a cloud service, and it's on Azure, which is also a cloud service. So I think I can still manage to show you a whole lot today. This PowerPoint Online is my only slide, so I will close it down and just start, because I thought it would be nice to have a demo-only session. First, I want to introduce myself. My name is René van Osnabrugge. I'm a lead consultant at Xpirit in the Netherlands. We are a firm that consults on ALM, Azure, cloud and mobile, and I help companies improve their software processes — so I really look broadly at people as well as tools. My expertise is Visual Studio Team Services and Team Foundation Server, and I'm an ALM MVP as well. This photo was taken at Build — I also got to try the HoloLens there; there's a HoloLens session in the room next to me as well, so all the other people who are not in bed are probably there. It was a great experience, and I can really recommend everybody to just try it out if you have the chance, because it's really, really awesome. In private, I have a wife and two kids. My oldest is five, and my youngest — two days ago, he turned two. And that's it; I like to run sometimes. So without further ado, I want to just start with the demos — and let me start my timer, otherwise I will just hopelessly run out of time. First question: who works with VSTS already? Okay. And who works with the build and release management system already? Oh, okay. Then I hope I can still tell you some new stuff. So this is the intro page — the dashboard. You can create all kinds of widgets, you can create your own stuff and put everything on there; it's a dashboard, an entry point for VSTS. And there is also the work hub, which we can use to do work item management. To walk you through what we are going to do today: I created a small backlog. I already did the preparation — but not properly, because my laptop crashed and was updating. I already did the introduction, and now I'm going to do the agenda. So what I'm going to do today: I'm not going to show you all the build steps that are in the box. I'm not going to say, okay, this build step can do that, and this build step can do that. I want to really dive into what build and release management is — the conceptual things behind it. How do the agents work? How does the infrastructure work? How do the cross-platform build agents work, which can only run on certain machines?
So I can show you the stuff that is really below the surface, because the tasks themselves are really situational — it depends on the situation you're in which ones you'll use and how. So that's what we are going to do. First, we are going to set up a build; I'll talk a little bit about setting up tests, capabilities and demands, triggers, et cetera. Then we walk into VSTS Release Management, which, as you can see, is quite similar to the build system — only you have the approval and validation parts in there as well. And lastly, if my laptop works again, we can do some customization and extensions: I can show you how that works, and I will show you a cool extension. Before I continue — my laptop is responding again; I can log in. So if everything works here, I can just start with a simple demo on this one, and then maybe move over. First of all, I move to the Build tab. This is where everything happens. As you may or may not know, Microsoft first had the XAML build engine. Who used the XAML build engine? Who liked it? So I think the main reason people used Jenkins and TeamCity was the XAML build engine — it really was bad. You could do stuff, but if you wanted to do something a little bit different from what was provided out of the box, you ended up in customization, and that was really, really hard. You had to build assemblies, put them in source control, configure your controllers — it was really, really hard to do. So Microsoft understood that, and they said: okay, we are going to build a new build engine — an engine that is easily customizable, that can easily be extended, and that is very, very lightweight to run. And that's what they did. I use Jenkins at my current assignment, and I think this one is better. I think it's really, really good: it's really fast, and it's really handy in use. So let me just start with a simple build definition. When I press create new build definition, I can select templates. What is a template? It's just a blueprint of something I already did, or of what is provided out of the box by Microsoft. And as you can see here, it's not only Microsoft-based templates: you can see Android, you can see Gradle, you can see Maven. So it's not only Microsoft-related stuff; you can also do non-Microsoft-related stuff. This is really great. You also have deployment templates, which are nothing more than build templates with some build steps in there that do deployment stuff. And I can have custom templates: you can create your own template, for example for your own components that have certain steps you always want to follow, or a certain build number format, and you can just save those templates and create new definitions from them here. Then I just say next, and I have this template as a base. Then I can select a repository — and the source is not only VSTS. Within VSTS I can select both kinds: a TFVC repository — Team Foundation Version Control — that resides in the same project, or a Git repository. What else can I do? I can move to GitHub: I can create a service endpoint to GitHub, extract my sources directly from GitHub, and use Microsoft's build engine in VSTS to build my sources from GitHub directly.
There is also the remote Git repository, which is just somewhere, and Subversion. Who uses Subversion? I don't know why it's there, but I think it's good that it is. So I select a repository, I select the default branch that I'm building, and I can select the agent I'm running on. I'll come back to agents a little bit later, but for now this is the hosted agent: Microsoft has a pool of agents somewhere in the cloud that you can just use. So if you don't have fancy stuff to do and you don't want to set up infrastructure, you just point to the hosted pool and Microsoft provides the agents for you. So I create, and the first thing I do is add a build step. A build step is simple and lightweight — you should see it as a zip file which has a manifest that describes the task: the name, the title, the input parameters and so on. And it contains a PowerShell script or a Node.js script, which is executed when you run the build step. Microsoft is moving to Node and to .NET Core, because that's cross-platform: if you use PowerShell, a task can only run on Windows-based machines, of course, but if you use Node or .NET Core, you can use the same build tasks on multiple platforms. So Microsoft is really moving all their tasks to the new infrastructure, but most of them right now are still PowerShell, and you can see the variants there. So let me just start by adding a simple PowerShell step, and I can hit close. I add a PowerShell script, choose whether I want inline or a file path, and then I can just write some PowerShell. Let me keep it simple and easy: I just write Hello NDC. I save the template and give it a name — let's say 'first look build', for example. And then I just queue the build. Really simple. If I had selected a template with more steps, there would be somewhat more configuration, but I can just hit enter now, and it starts up my build. It looks in the hosted pool for an available agent — and what you sometimes see, if it's a little bit busy, is that it takes a few seconds to get an agent that is available to run the build. And then it's running. What you see here is Hello NDC — the output of my PowerShell script. Nothing fancy here. So let's dive a little bit further. For example, I add another build step: a PowerShell script, inline again, and this time I want to do a curl statement — curl http://google.com. I save, and if I queue the build now, it will of course run my first PowerShell script again, but what we'll see is that it will crash on the hosted agent. Why? Because curl is not installed on the hosted agent. The hosted agent does not contain everything you want: it contains the basic stuff — the .NET frameworks and everything you'd expect from a Windows or Linux agent — but it doesn't contain the tools that you sometimes need. So you can do two things. You can provide the tools yourself, by copying the exe files you need to run curl, for example, to the agent, and then executing them to do your thing on the hosted agent. Or you can run an on-premises agent, and that's what we are going to do. And my laptop works, so I'm going to switch machines.
And then I can show you on this laptop — give me a sec. Okay, I opened up the build that we just created, and what I want to do now is switch to my on-premises agent. For that, I go to the General tab, to the default agent queue, and I press Manage. Okay — I may need to connect. Yeah, here we are. In my management view I have queues: a default queue, possibly multiple queues, and this hosted queue. The hosted agent is always online — if this one is red, something is wrong at Microsoft, and we cannot do anything about it; this one is always green. Then we have the default pool, and we can create all kinds of agents in there. So what do we do? We can download an agent. If I press this one — and I won't, given the time I lost — it downloads a zip file for me, and that zip file contains some files. You unblock the file — very, very important to unblock the file — and then you extract it to your local disk. So when I go to my command line, I have an NDC agent folder. This is roughly what it looks like: you have the agent directory and the configure and run commands — that's what you initially get when you extract the file. Then you configure the agent: it wants to know which collection it's connecting to, what this agent is called, et cetera. So it will now hopefully start connecting. Maybe I should switch back to the other machine — I'll show you. Okay, let's try something different: I will just download the agent again. I think the network connection is a little bit slow. This is the agent that I downloaded, so I'll just copy it, create a new folder, NDCA, and extract it here. Let's try this one. I don't know why this is not working — ah, okay, I have to configure the agent. So I give the name of the agent, the URL of the server I'm connecting to, and the pool to which I'm connecting. You can create pools of agents, and you can share those pools among the different collections that you have. In VSTS, you only have one collection; one collection can have a pool, and you attach a queue to a pool. If you have multiple collections, you can create another queue on another collection and connect it to the same pool. That's how you can share your build infrastructure within your organization. If you used the XAML builds before: you had a build controller, the controller ran on a machine, and it connected to one single collection. If you had another collection that you also wanted to use for your builds, you had to create a new machine with a new build controller — so it was not really scalable to share your build infrastructure. Instead, we can now share build queues. So I'm connecting to a build queue, choosing where I want to get my sources, and I can say whether I want to run this as a Windows service or just keep it running in my console. What I usually do — I don't use the hosted agent much — is just run it in a console on my local machine. When I'm creating and testing builds, I run it in a console, because then it pulls all the sources to my local machine and I can easily debug and work with what it gets; I know exactly what happens, because it's on my machine. And once everything is running properly, it will just run on a build agent somewhere in our build infrastructure.
It's a really convenient way to just easily start up a new build agent, or multiple build agents at the same time. So now I connect: I sign in to my account. Without special firewall stuff, this build agent connects over the default ports to my VSTS account, and it just starts running. So here it's running — and I had to start something up here as well; that's part of the preparation I missed. So here you can see that the RVO Dell 2 agent that I just started up is now running on this machine as well. Can I just remove this one? No. So when I go back to this build — and if you still remember, we put a curl statement in this build — I can queue it and say I want to run on the default queue: not on the queue in the cloud, but on the default queue, and I run. VSTS starts the build, puts it in the queue, the queue finds an available agent, and then it starts to run the build. And you can see that it finds the RVO Dell 2 agent, because that's now the only agent available — the other agent on that machine, which I used before, I disabled. So it gets the sources, and then it executes the PowerShell scripts — hopefully both. Okay — not recognized. So I probably did something wrong in the... okay, okay. Thanks. But when this runs, it shows me the output of the curl statement. That's not very interesting in itself. What is interesting — so let me change this back to the default queue for the next run — is this: obviously, it sometimes happens that my agents are not capable of running a build, because they don't have the tools that the build needs. How can I manage that? How can I make that controllable? We can do that by adding demands. On the General tab, I also have demands. A demand means I can say: I want something to be present on the build agent when this specific build definition runs. I can say, for example: curl must exist on this build agent. And save. This is a manual entry, so I can add anything I want — I can sort of tag my agents and my builds: this build is an up-to-date build, and this build is a legacy build. So you can do a lot with demands. But there are also tasks that add demands themselves. For example, there is a Gradle task I can add, and this Gradle task automatically adds the demand 'java': if I want to run Gradle, it demands that Java is installed on the machine I'm running on. So the task is configured to add that demand automatically. So let me just quickly remove that. When I run the build now, it tells me there is no agent capable of running this build. It demands curl, but there is no agent with curl. So sorry — you can queue it and wait forever, or you can just fix it. So what we are going to do, of course, is fix it. On my agent that I just created, there is a tab: Capabilities. Demands and capabilities are a pair: I demand something, and my agent should have that capability to be able to run the build. What you can see here are the system capabilities — Chef, DNX, Java — which are all system environment variables on my machine.
These are the tools I have installed on my system: the agent just reads the environment variables, so it knows that Java is installed. If I need something extra, I can just add it: curl, with a value of, for example, 1 or true — I don't care which, because the demand only checks that curl exists. So if I save the changes here and run this build again, it automatically queues, finds an agent, and it will only run on agents that have that curl capability configured. So it now runs, hopefully without errors, because I just fixed it — and here is the curl output. So that's the queues and the pools, and how you can manage your build agents with demands and capabilities. This is how you steer your builds through your build infrastructure. But this is not cross-platform yet. So what I want to show you now is the same kind of build, but running on a Linux-based agent. So we create a new build definition — an empty one again — I select my NDC project, create, and add a build step. But this time I'm not adding a PowerShell script; I'm adding a shell script step. This is the Linux-based equivalent of PowerShell or the command line. So I can just say echo Hello NDC — and if we look at the demands, it demands sh: it demands a shell to be present on the agent it runs on. So if I press save now, call it Linux, save, and queue this build — I don't have an agent. I don't have a Linux agent running yet. So we are going to fix that. I have a Linux machine running in the cloud, and I need to quickly look up my password here. So, this is a Linux machine. What I can do is, of course, download the agent onto this Linux machine — which I did; there is a build agent here. But on Linux we can do other stuff as well: we can also run Docker, because Docker is a very, very lightweight, virtual-machine-like thing. Who knows Docker, the concepts? Yeah? Okay. So instead of installing the agent directly on the machine — where the machine has all the environment variables, a specific version of Java, a specific version of Node or whatever, so my agents can only run that specific set of software — I could create many virtual machines with agents installed and run different versions of software on those. But if I do it in Docker, I get all the benefits of the virtual machine, while staying much smaller and very easy to roll out. So what I did is create a Docker image called xpirit/vsts-build-agent, which has Node installed: it's just a Docker container with a build agent configured and installed already, but not running yet. So if I run this container by saying docker run -it xpirit/vsts-build-agent bash, it starts up the container — and you can see how quick that is, because it's already done. Then I can see the agent directory here, so I go in there, and then I say switch user, because I cannot run this service as the root user — that's not allowed on Linux. So I have to switch to another user. And then I can just say run. It asks me for my username to connect — I already configured this agent to point at my collection — I run, and you can see how easily this container starts up.
I could also have docker run start the agent automatically, so you don't have to start it manually — but this was just for demo purposes. So when I look now, I have a Docker agent running as well. When I queue this build, it goes to the same queue and finds an agent that has sh installed — in this case, my Docker agent — and then my NDC message automatically pops up here as well. So you can imagine — and I won't show it now, to save a little time — that I could just exit this container and say: okay, I now want to run another container with Java 8 installed, because I'm doing a Java build that demands a Java installation. I can easily start up a new container — for example, the VSTS build agent Java image — it pops up, and the whole infrastructure makes sure the build runs on the container that is capable of running that specific build. I wrote two blog posts on build agents in Docker containers, both on Linux and on Windows — because Windows is now also capable of running Docker containers, in preview. So you can look at roadtoalm.com; there are two blog posts there that describe how to do this. It's a really convenient way to just quickly start up new agents and do some configuration there. So that's really great. So, leaving my Linux build for what it is — because after all, I'm still a Windows guy — let us walk through the rest of the build possibilities. I opened up the wrong one; let me open up this one. So let me walk you through a template. If I choose the Visual Studio Build template, it provides me this: it does a NuGet restore, a Visual Studio build, runs the unit tests it can find, and publishes symbols. And then I added two tasks: Copy Files — which more or less creates a nice structure for your build drop — and Publish Artifacts. You can publish artifacts to your build, and if you do nothing, it just publishes everything in the binary directories and you have everything. But if you want nice folders — for example, a folder with my website, a folder with my tests, and a folder with my other components — you can use the Copy Files task to copy everything into subdirectories, and then at the end you just publish the whole lot as a build artifact. So that's what I did. If we look at the solution, what you will see is that we have values in the MSBuild arguments. And for me, this was something to get used to. Because normally, when you used the XAML builds, you just said: okay, I want to build this solution, and then you had some checkboxes — I want to do code analysis, I want to create an MSDeploy file — just checkboxes, and then you could run, and then you had it. The new build agent is very lightweight, and it does not really do things for you. There are steps that do the stuff, but nothing happens automatically. So if we want to do a Visual Studio build — and in my case, I'm building a website and I want to create an MSDeploy zip file that contains my website — I have to do some MSBuild magic. So you have to know MSBuild a little bit better than you maybe did before: you have to know what's happening inside MSBuild, and what MSBuild needs to be able to do this.
So in this case, I want to do a web deployment, so I have to add the parameters: DeployOnBuild, WebPublishMethod=Package, and PackageAsSingleFile. And I also have this one: RunCodeAnalysis. First it was just a checkbox, 'run code analysis', and under the hood that just called MSBuild with this parameter. If you now want to run code analysis, you just specify this parameter yourself, and then it runs automatically for you. And then you have the package location, and this one has maybe a special value: $(Build.StagingDirectory) — the dollar-parentheses value. What's that? That's a variable that you can use within the build pipeline. So let us move to the Variables tab. If I look at the variables, let's first start with the predefined variables. There are lots of them that you can just use in your build pipeline: for example, the build directory, the home directory, the name of the agent I'm running on, the path of my artifact location, the path of my source location, the path of my binaries location, et cetera. All those variables are present, and you can reference them in all the steps by just writing a dollar sign, an open parenthesis, the name of the variable, and a close parenthesis. When running PowerShell, they're injected as environment variables, and you can refer to them with $env: and then the name of the variable. So that's really useful. You can also, of course, create your own variables: just add a new one, give it a name, and give it a value — for example, a password. Let me do that. If I want to store a password, I can just type it in, but I want to encrypt it. There is this almost hidden padlock that you can press, and then it hides the value from the screen. That's great at first, but it will also encrypt the value in the database in VSTS, and it will not show up in the output logs of your build. So then you're safe. If I save this and want to change it later, it's never visible again — I always have to fill it in again. So that is great; I will just put this back. Then, walking through the other stuff: I can run multiple configurations at the same time. For example, I want to do a release and a debug for different platforms, so I can add multipliers here — for example, the BuildConfiguration in debug and release, with any CPU or x64 as the BuildPlatform — and it just runs all those combinations together. You need multiple agents to do that, of course, because an agent can only run one build at the same time — so if you want to run in parallel, which is an option you can specify here, you need multiple agents available to run on. Then there is 'create work item on failure'. I think it's obvious: it will just create a bug when the build fails. And this one is very interesting — I also wrote a blog post on it, on Road to ALM. This one gives me an access token. If I check this box, it will automatically create, under the hood, a variable called System.AccessToken, and that access token can be used to access the REST APIs of VSTS. So if I want to say, for example: give me all the commits attached to the build I'm running now, because I want to do extra stuff with them, I can call the REST APIs of VSTS. But in order to be able to call the REST APIs, you need to authenticate against them — and because the build agents are just running on some machine, you would normally need to log in to that REST API. If you check this box, it provides a token with all the authentication details, and I can just access the REST APIs by putting that token in the headers of my request — and then I can get everything from the REST APIs. So that's very convenient: otherwise, you have to write your own authentication stuff inside your PowerShell, and now you can just use this. So this is really, really useful, and if you want to use it, you should look into it. And the repository — that's nothing special again; we can just change the repository where we are getting the sources from. And we have, of course, the triggers. The continuous integration trigger is just a checkbox: every time I check in to the repository that is attached to this build — in my case, the Git repository — and it listens to the branch, so if something changes on this branch, this build triggers and automatically runs. And you can, of course, have scheduled builds at the same time. And that's something that is also new compared to the XAML builds. A XAML build could only do one thing at a time: you could either do a continuous integration build, and then you had to copy the definition to get a scheduled build; and if you wanted to run two times a day, you had to copy the build again for another schedule. Now that is all different, because we can just do everything on one definition, and we can add multiple schedules if we like. So this is really handy for your build configuration. So let me just switch this off. What more is there on the General tab? Not much more than that you can change the build number format and the retention policies — how long do I want to keep those builds? And then — this is very, very convenient — history: audit logs on the build definition, so I can see exactly who changed the build definition, and why. So that's also very nice. So when I run this build — and I'm not going to, because my time is going really, really fast — I also prepared something like this one, which is maybe a little bit more verbose, so you can see all those things. I run unit tests, so I can see my test results directly in my build report, and the code coverage for the unit tests I'm running is directly in the build report too. I can add tags — a really convenient way to mark, for example, your build quality: this build is ready for test, this build is ready for release, or whatever. And you can see the associated changes: this commit is attached to this build, and if there are work items attached to that commit, they also show up here in the associated work items. What I really like, when a build fails for example, is that you can also see the failed tests, and the detailed test report really shows you nice information. For example, this test, validate shopping cart, failed: when I click it, I have my error message directly on the screen, and I can see the stack trace. So there is really rich information in these build reports, which you can also later pin on a dashboard or whatever you want to do with it.
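As a rough sketch of such a call from inside a build step — the account name and API version below are placeholders, not from the demo — it could look something like this in Node:

    // The token shows up as an environment variable when the checkbox is on
    const https = require('https');
    const token = process.env.SYSTEM_ACCESSTOKEN;

    const options = {
      hostname: 'myaccount.visualstudio.com', // placeholder account
      path: '/DefaultCollection/_apis/build/builds?api-version=2.0',
      headers: { Authorization: 'Bearer ' + token }
    };

    https.get(options, (res) => {
      let body = '';
      res.on('data', (chunk) => { body += chunk; }); // streams again
      res.on('end', () => console.log(JSON.parse(body).count, 'builds found'));
    });

If the checkbox is off, the variable simply isn't there and the call will fail to authenticate.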
So this is the new build system, and it's still expanding: they are really building the widgets for release now, the widgets for test, so you can have a very, very rich build report. So that's great. So let's move on to release. Release is actually not very exciting after everything I told you about build, because it runs on the exact same infrastructure. There are two separate teams, and there are differences, because that's how it works at Microsoft: they talk to each other, but they also have their own implementation of things. So there are tiny differences. For example, renaming: here I rename like this, and in build I have to right-click it and rename. That's the tiny stuff that, yeah, frustrates me, but okay, it still works. Those are small things; the big picture is all the same. So we have the new release definition, and we have the tasks here. When you create a task, you can say it's available in build, in release, or in both — and I think 95% of the tasks in the store right now are available in both. So if you look at these tasks, it's all the same: PowerShell, deployments, running whatever you want, copying files, doing Azure stuff, creating resource groups on Azure — it's all there. This one, for example, Azure Resource Group Deployment, is a really powerful task. You point to a subscription — which you can manage by creating a service endpoint — and then you point to a resource group and say: start all the virtual machines, stop all the virtual machines, or create a new resource group based on this template. It's just there, and under the hood it works with the PowerShell stuff. An extra dimension that we have within a release pipeline is the environment. And I personally think environment is the wrong term inside the release definition, because for me, environment implies a bunch of machines that it runs on. Maybe it is — but actually it's more like a stage: something that I do in a specific stage of my pipeline. In this case, in the development stage, I want to deploy to this environment, and this environment is multiple servers. But when I've done that, I may want to move to the next stage, where I copy some files, create a zip file, maybe send out some email or whatever. And that's not an environment; it's just a stage that I'm going through in my pipeline. So an environment is a group of tasks that you want to execute, and you can put validation and approval on that specific stage. So keep in mind that it's not related to machines — it's more like the phase you are going through. So I created four environments: development, QA, staging, and pre-production. And every environment has its own pipeline of steps. Luckily for us, you can clone those environments: if dev does roughly the same as acceptance, only with different parameter values, I can just clone the environment, and I have it directly. So in my dev environment I just do a website deployment — not really interesting in itself, because it takes the website that I created in my build and deploys it to a web application on Azure. But how does it know which build definition it should respond to?
When I create a new release definition, it asks me; but for one I already created, I can go to the Artifacts tab. And the artifacts are something that you really should keep in mind: Microsoft really split the responsibilities of build and release. Who uses Jenkins? Okay — there is no such split there, and I think it's a nice story to know, because Jenkins is a task runner. It's a build system, or a task runner: in Jenkins you can do everything. You just schedule a job and it runs — sometimes it builds, sometimes it deploys, sometimes it does something else; it doesn't really matter. Here in VSTS, they really made the responsibilities explicit: we have a build, the build outputs an artifact — a package, a zip file, or an MSI — and the release takes that artifact and does something with it: deploys it to an environment, copies it to some machine, or whatever. And that's something you should really know. You always need a build to have the artifacts, and then the release pipeline takes care of the rest. So in my case, I linked the release build to this release pipeline, and I can link multiple builds. If I have 20 components, then I have maybe 20 builds, all producing an MSI; then I have one release pipeline, I attach all those MSIs as artifacts to my release, and then I execute my deployments. So I attached this release build. In my task, it asks me for the web deploy package, and when I browse for it, it shows me the linked artifacts. This is something I delivered inside my build — these are the artifacts of my build that I published — and I can point to them. There are other artifact sources as well: I can point to a build, I can point to a Git repository, I can even point to Jenkins. If I have a Jenkins server that is reachable, I can create a service endpoint to that Jenkins server and get my artifacts from there — so I can use release management without the builds in VSTS. That is also great if you're transitioning, because eventually you are going to move over to this anyway. And then we have the variables, same as in the build, and you can reference them the same way. There are some additional predefined variables you can use, like the release artifacts directory and such. There is one catch here — a really, really bad piece of UI: this 'Release variables' label is actually a link. If I click it, I can switch to environment variables, and then I can see all my variables per environment. So this is also nice. Triggers: I can trigger manually, so I start the release myself. I can also do continuous deployment: when the build that is attached to this release finishes, it automatically starts this release. And I can do scheduled releases as well — say, do the release every night at three o'clock. And this one is also very interesting, and quite new: environment triggers. When are we going to trigger this environment? This one will trigger after release creation — its deployment condition is 'after release creation' — so when I press new release, it will automatically start executing the development environment.
When I go to, for example, the QA environment, this one will trigger after another environment has finished — or it will trigger manually, for example. This way I can also do parallel stuff: my QA environment says, okay, I want to trigger when the dev environment is ready; but my staging environment also says, when the dev environment is ready, I want to trigger. So this way you can deploy to multiple environments in parallel. And then they come back together for my production environment, which says: okay, you have to wait until QA and staging are ready, and then deploy to my production environment. So you can really create a nice flow of how your application moves through your infrastructure. That is very nice to see. And on the General tab we have just some simple stuff, like what the release number looks like, and we also have a history trail here. Creating a release is quite easy: you just create the release, point to the build that you want to release — in this case — and then you start running the release with create, and it will automatically run. There is one thing that I missed, and that's, I think, the main advantage of using a release pipeline like this: we can add validation. So what we can do is assign approvers to a specific stage. Whereas a build just runs when you trigger it, in the release pipeline it's different: we can say this stage must be approved by someone — by me. So before you do something, I want to send an email to this guy, and he has to approve it. The first stage is most of the time not an approved stage, but for the second stage, going to test, sometimes a product owner wants to say, okay, I want to do it — or the testers say, okay, I want it now — and they can just approve. And you have pre-deployment and post-deployment approvals, so when a deployment is done, someone can also say: okay, it's good now. So in this case, we have approval. When I start this release, point to this build, and say create, it will not start automatically — it has this little guy, and here it says: okay, you have to approve this specific release. And now it's waiting until the approval comes. So I will not run it, because I want to show you the last demo, about extensions — and I think that's cooler than looking at the output of a release that's just executing some scripts. So, going back to build: I showed you the builds, and I showed you how to add build steps. But I didn't show you that you have more build steps available than what's here. If you close this out and go to the shopping basket icon, you can browse the marketplace. The marketplace already contains a lot of build tasks that you can just click and install; if you do, they will be available in your toolbox, and you can just add them to your pipeline. And there are already great extensions available, and they are really pushing the community to build them — so it will grow and grow. And you can also build them yourself, and that's what I quickly wanted to show you now. So what I did is create a small build task; this is my Git repo for it. And as you can see, it's very, very easy to do, because there is only a vss-extension.json manifest, which is just a description of this extension: what's the version, what's the description, what does my readme page look like, what are the background colors in the marketplace — all that kind of configuration.
Some licensing and overview documentation and some images, also not really relevant. Then we have the task manifest, which describes the task itself: the version of the task, which PowerShell script it executes, which agent it should run on, the minimum agent version, and the demands. If I need curl or Java, I can add that to my task manifest so that I can demand it from my build pipeline. You can see that this task writes the message here. And then we have, of course, the PowerShell script itself, which in this case simply writes a message. So that's it.
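The task manifest being described is a task.json file sitting next to the script. A minimal sketch, with illustrative values (the GUID, names, input, and script file are placeholders, not the ones from the demo), might look roughly like this:

```json
{
  "id": "00000000-0000-0000-0000-000000000000",
  "name": "WriteMessage",
  "friendlyName": "Write Message",
  "description": "Writes a configurable message to the build log.",
  "category": "Utility",
  "author": "my-publisher",
  "version": { "Major": 1, "Minor": 0, "Patch": 0 },
  "minimumAgentVersion": "1.95.0",
  "demands": [],
  "inputs": [
    {
      "name": "message",
      "type": "string",
      "label": "Message",
      "required": true,
      "helpMarkDown": "The message to write to the log."
    }
  ],
  "execution": {
    "PowerShell": { "target": "$(currentDirectory)\\WriteMessage.ps1" }
  }
}
```

Packaging the directory into a VSIX is then a single tfx call, along the lines of `tfx extension create --manifest-globs vss-extension.json`; the exact flags may vary with the tfx version.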
Then there is the tfx command line, and with tfx you can package this directory into a VSIX. That VSIX you can upload to the marketplace, where you have to be a publisher. To show you the marketplace I need another browser; don't ask, it's Microsoft identities, which really fail me sometimes. So I click the work account; this is all stuff I prepared on my laptop, so I don't have to sign in. This is the marketplace. I have created one task that's already publicly available, for sending email, and you can see there is only that one task. The other task that I created is private. Instead of running the command line myself, I created a build and used a task from a colleague of mine that handles all the publishing: it replaces some variables, the version, the publisher, the public or private availability, it takes care of all kinds of stuff. So I used that task, and it just says: take this Git repo, package it into a VSIX, and upload it to the marketplace. When I run this build, it will hopefully do that. Now it's running on the Docker agent; that's not nice, so let's try that again. Now it's running on my machine, because it needs tfx, and apparently the task's demands are not declared correctly, so that's feedback I'm going to give him: take care of your demands. It's now publishing to the marketplace and setting the version. Okay. When I refresh this, I have the write-message task available in my marketplace, shared with one account, in my case my own account. Moving to a build that I already prepared for this, the write-message demo: I can now go to manage extensions, and this write-message task that I created, with the nice NDC Oslo logo, I can click and install into my account. I just hit continue; it's now installing, and then it says proceed to account. It's possible I'll have to restart my browser, because there is sometimes a little lag before the task shows up, but hopefully it will work. If I now edit my build definition... okay, I had already added it once, so it disappeared when I removed it, but in my toolbox under utility there is this write-message task that I created, available only on this account because it's private and shared with this account. I can share it with multiple accounts if I like. During development that's nice to have, and sometimes you have specific stuff that you only want to run inside your organization, so you keep it private instead of making it public for everybody. So I configured the message, I already put a variable in it, and then I can queue the build. That's the last step of my presentation. Before I run it, I want to tell you that I'm available today at the conference, so if you have any questions, or you want to see more, just find me and I can show you everything you like. I hope you enjoyed the session. Excuse me for the rough start; it was a little bit noisy in the beginning, but you can imagine the stress I had with the laptop. Thanks for your patience, thanks for being here, and now I'm going to run this build. Hopefully it will just start and show some nice output that I created for you. First it has to clone my repository; that's exciting. Try to find a decent internet connection when you're demoing VSTS. So that's it. Thank you very much. Thank you.
With the release of TFS 2015 the new Build system was introduced. The changes compared to the old XAML build system are spectacular. It is now cross-platform, fast, lightweight and task based. The same concepts apply to the new Release Management tooling that is available in Visual Studio Team Services and that will eventually flow into the on-premise product. In this session we take a close look into the new build and release system and talk about how you can benefit from them in your daily workflow. Even if you are not a Microsoft developer!
10.5446/51695 (DOI)
All right, three o'clock, let's get started. Good afternoon, NDC. Before we get started, there are some things that you are not going to see in this talk, and I will tell you what they are right at the beginning, so if that's what you are here for, you can leave with no hard feelings. You will not see any code. You will not see any slides. You will not see any PowerPoint animations. And unfortunately, you will not see Pieter. This talk was originally pitched as a double act. Pieter Hintjens did a keynote at Coding Serbia last year, where he talked about the laws of physics as they apply to software systems. We were talking about this in a bar in Vilnius, and basically I loved the idea, and we started bouncing things backwards and forwards; it was really, really interesting. I said to him: why don't we do it at NDC Oslo as a double act? He said that sounds like a good idea, for reasons those of you who know him will understand. He's not here. So, if you want to see PowerPoint, I believe they have some in the next room. I am going to talk to you about the laws of physics. There are all of these wonderful laws that people have discovered and refined and proposed and proved over the years, and some of these laws apply to the software projects and the teams and the communities that we work in every day. I'm assuming most of the people here in this room are developers of some sort or another. We work on teams, we get stuff done, we build systems. And there are interesting observations about the way that we interact, because software is about people, and people are subject to the same physical laws that govern the universe we live in, just like everything else. Some of this is going to be fairly entertaining and high level, and some of it is actually going to be fairly specific. And we're going to do it in chronological order, starting with Isaac Newton. Newton was a genius; Newton basically founded the principles of modern physics and mechanics. Newton was so smart that in the 17th century he came up with a set of laws which tell you why distributed software teams will fail. He just didn't know that's what he'd done until more than three centuries later, when we looked it up and went: that's interesting. Newton was a fascinating guy who basically had three different careers. First he was a physicist, a natural scientist, at the University of Cambridge, and he published the Principia Mathematica, where he laid out all these rules about how things work: how planets move, how gravity works, how all these kinds of things happen, how they behave, how to predict their influence. Then he became an alchemist for a while, and we don't talk about that very much, because he wasn't nearly as successful at turning lead into gold as he was at understanding planetary motion. And then he came to London, ran the Royal Mint, and did that for the last chapter of his life. So we're going to talk about Newton's laws of motion. Newton's first law, published in the Principia Mathematica in 1687, basically says that something that is at rest, something that's not moving, will remain at rest until a force acts upon it, and something that is already moving will continue moving at a constant speed in a straight line until a force acts upon it. So a team that's not moving won't start moving unless there is a force, a motive, an incentive for something to start happening. This is why, when you wake up in the morning, you think: I'm not going to move.
I am at rest right now, and Newton says I should remain at rest. And then you remember that you have bills to pay, and if you don't get up and get online or go into your job, then you probably won't get paid. So there is an economic motive, a force acting upon you, which will get you out of bed and get you moving. And the same thing is true of teams and companies. Organizations and startups will see an opportunity. We were talking at the show last night about the fact that the CEO of Uber doesn't have a driver's license; his driver's license expired, and the joke is that it's so Silicon Valley that when your driver's license expires, instead of getting a new one, you do a startup that will get people to pick you up from your door, because it's more fun and it's easier. So there are two kinds of force here. There's the attractive force: the economic opportunity, seeing something and going, there is something here, we could go and do something, we could get rich, we could change the world, we could make this a better place. And then there is, if you like, the push force, the fear, where you're being driven to do something. Sometimes it's threats from competition, sometimes it's legislative, sometimes you need to do some work because a piece of your stack is a ticking time bomb and you know that you're going to run out of address space or run out of capacity. So you have this idea of forces. Now, have you ever worked in a company where they've been doing something happily for 10 years, and you turn up and try to change something, and they're just like: no, no, we don't do that; this is not how we work? It's the same thing: once they're moving in a straight line, they'll just keep trucking on. Anyone who remembers studying Newton's laws in high school will remember that the examinations always had these problems like: imagine a weight sliding down a smooth frictionless plane in a vacuum. Which doesn't happen, unless you're going ski jumping on the moon. But friction is a force, and friction applies to real objects in the real world; it's why things slow down, and friction applies to organizations as well. Even if you've got a team who are working and goal-focused, you're building a project, you have something to ship: day by day, the novelty and the initial push fade. You have the kickoff meeting, where it's really exciting and inspiring and visionary, and a couple of weeks down the line you're like: this isn't really quite as much fun as it used to be. And so it slows down, because that organizational friction is a force, and that force will slow down the people and the teams that you're depending on to make these things happen. Newton's second law. This is the one you probably remember from school: F equals ma, force is mass times acceleration. Acceleration is how fast you can change something. You can change something by moving it very, very slowly, but acceleration is any change in speed or direction. If you're moving and you stop, that's an acceleration, a negative acceleration. If you're standing still and you start moving, that's an acceleration. If you're going this way and you need to go that way, if you're running Mac and you need to switch to Windows, if you need to get everything from Perl to Python: it's a change of direction, and that's an acceleration.
And the force required to effect that acceleration scales with the size of the thing you are trying to change. This is why startups can go: hang on, people quite like the fact that they can share photographs in this chat room we've got; let's turn our company into a photo sharing thing with comments. That's how Flickr turned from a chat service into an online photo album: because they were small. They didn't have a lot of mass, they didn't have a lot of inertia, and startups can pivot really easily. Compare this to big organizations: Microsoft, Oracle, Sun Microsystems. These organizations are big. Start thinking about them in terms of supertankers: massive, heavy structures that do not change direction easily. Now, the thing about F equals ma is that you need to think about the time it is going to take. You can steer a supertanker by pushing very gently for about six weeks, and it will very slowly come about. If you want to do a handbrake turn in a supertanker, stuff is going to get broken, because you are applying a massive force to effect a big acceleration on something which is massive. I'm sure there are people here who have been in organizations where somebody has used the fact that they are powerful to try to implement this kind of change. An email comes down from the CTO: from tomorrow there will be no more GitHub; we are using Visual SourceSafe for everything. They can do it, because they are powerful, so they can wield massive force. But trying to wield that force against a large organization to effect that change in a short space of time will break things. If they do it with enough authority behind them, then sure, by the end of next week there will be no GitHub users left in the organization, but not necessarily because they've all gone and done the thing you wanted them to do. It will be because they've left. So visualize someone doing a handbrake turn in a supertanker, into an iceberg. It's a very destructive mental picture, but when you are trying to effect large changes to large organizations in a very short space of time, that's effectively what you are trying to do. Newton's third law: for every action there is an equal and opposite reaction. You push something, it pushes back. I'm standing here right now, pushing down, and the floor is pushing me back up. By happy coincidence they are exactly the same; that's why I can't fly, but it's also why I'm not sinking into the ground. You get this in organizations as well. You try to change something, there will be resistance; you propose something, people will push back at you. Hey, I think we should stop sending out Word documents and do our updates on the wiki. No, we don't like the wiki. But the wiki is great, everyone can read it, it's online, anyone can edit it. I don't like anyone editing it. You get this reaction; it pushes back. The other interesting thing about the equal and opposite reaction is that if you have one of these massive organizations, with lots of inertia, lots of people, lots of money, you can use that as a springboard. If a very small organization tries to fire off something to go and explore a new opportunity or a new venture, it can actually split the team, because suddenly they don't have the bandwidth to cope anymore.
But if you are, say, Microsoft, steaming along through the software industry with however many billion dollars in Windows and Office revenue behind you, and you suddenly realize that Sony and Nintendo and Sega are about to start winning the console war and you think it would be a great idea if Microsoft had a console: you can't steer the supertanker. Microsoft, the organization, could not have pivoted to focus on games consoles as its primary area of business. But what they could do is leverage the mass that they had to spin off the Xbox division without worrying that it was going to disrupt what they were doing. Xbox may have failed, but I don't think anyone would ever consider that spinning off a small team to go and develop a games console was an actual threat to Microsoft. There's the marketing risk and the odd piece of brand fallout, but in terms of the core operation of that organization and the way that it works, the fact that they have that mass and that inertia allows them to spin things off against it at relatively little risk to what they're doing. So those are Newton's predictions about software teams and organizations, written down in Latin in the 17th century, and here we are today exploring them and thinking about how they apply to what we're doing. There is another thing called the equivalence principle. The equivalence principle is an observation in physics that effectively says it is impossible to tell the difference between gravity and acceleration. You don't know if you're in orbit or you're falling until you hit the ground, because as an observer you cannot distinguish between acceleration that is due to gravity and acceleration that is due to a force being applied to you. Science fiction tropes love this. You ever read any of those books where they have a spacecraft that accelerates all the way, so that there's gravity from the fact that the engines are pushing, and then when it gets to the halfway point the whole ship turns around and does the rest of the journey in reverse, braking the whole way? That acceleration and then braking provides artificial gravity for all the people on board; it's an idea Arthur C. Clarke used a couple of times. Now, the interesting thing about the equivalence principle is that it applies to observers within the frame of reference you're talking about. There is a thing in aviation called a graveyard spiral. A graveyard spiral is when you're flying in cloud, you can't see anything, your instruments are out, and you think you are flying straight and level. What you're actually doing is banking in a turn, but because the centripetal force from the turn feels just like gravity, you think you are in level flight, and because the 'gravity' is coming from the turn, you're not conscious of the fact that you're falling. To somebody outside, it is completely obvious that you are not in level flight; someone watching you on radar would be going: pull up, pull up, pull up, you're in deep trouble here. The point is that from within our teams and our organizations it can be very, very difficult to tell whether we are doing the right things or not, until you hit the ground and realize that you've been falling all this time.
Projects where you're delivering code, you're working hard, you're shipping things, and then suddenly you realize that you're not going to get out in time: your version-one product is targeting the wrong device, there are issues with your deployment strategy, something happens and, splat, you hit the ground, you faceplant, and it's over. So the equivalence principle is going to bite you, because from inside it's impossible to tell the difference between being in a stable orbit and being in free fall. You need external observations; you need somebody from outside who is looking in, observing what you are doing, to let you tell the difference between these two scenarios. Which means you need to measure things; you need to set up some metrics. And the problem with measuring things is something called the uncertainty principle. So: Werner Heisenberg has come over to London from Germany. He's driving down the motorway in his BMW, and because in Germany the Autobahn has no speed limit, he's got his foot down and he's going at a fair old clip, and the police pull him over. They say to him: hello, Mr. Heisenberg, do you know how fast you were going? And he says: no, officer, but I know exactly where I was. The policeman says: we just clocked you doing 120 miles an hour, mate. And Heisenberg says: great, now I'm lost. Okay, so the people who were laughing know what the uncertainty principle is. Heisenberg observed that in subatomic physics you cannot measure the position of a particle without affecting its velocity. The more accurately you know the position, the less accurately you know how fast it's moving, and the more accurately you measure how fast it's moving, the less accurately you know where it is. To measure velocity more accurately, you measure over a longer distance, which means you have more uncertainty about where the particle is during the measurement. And to measure its position, the only way is to bounce something off it. Imagine you've got a soccer ball in a dark hall and you're trying to find out where it is by throwing golf balls. Sooner or later one of them is going to go ping and come back at you, but as it does, the soccer ball starts rolling, because someone just threw a golf ball at it.
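For reference, the physical statement being invoked in the joke and the soccer-ball picture is usually written as an inequality relating the uncertainty in position, $\Delta x$, and the uncertainty in momentum (mass times velocity), $\Delta p$:

$$\Delta x \, \Delta p \ \ge\ \frac{\hbar}{2}$$

where $\hbar$ is the reduced Planck constant: the more tightly you pin down one quantity, the looser the other necessarily becomes.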
In software, we try to measure things all the time, because in a lot of organizations and a lot of programming methodologies it is very, very difficult to tell how good your progress is, how well you are doing, whether you are going to hit your deadline or not. And by trying to measure things about the way our systems work, we can actually interfere with them to the extent that they stop working. A completely degenerate example: someone comes in and says, look, this whole fixed-salary thing is nonsense. We're going to pay you per line of code. Software is code, and we need code because we need to finish it. So from now on, at the end of every day, tell us how many lines of code you have written and we will pay you a dollar for every line. How many people in here would be millionaires by the end of the morning? Because when they start measuring lines of code, you go: I'm not doing unit tests anymore, I'm just going to put new lines in, I'm going to put in empty statements. There are all sorts of ways of measuring things, and it's quite common in agile. One of the reasons why agile teams and Scrum say you should estimate everything in points is that people used to estimate in hours, and then others would use the hours as a yardstick to measure them against: you said this ticket would take four hours, why isn't it finished? You started at one o'clock, and I've checked in with you at two o'clock and three o'clock and four o'clock and five o'clock, and it's still not done. And you're like: well, no, if you add up all the tickets we do in a year and divide by the number of tickets, you probably get some averages out, but these are not absolutely accurate metrics. There are all kinds of things people measure. Anyone ever built websites and then run them through the W3C XHTML validator, so you get that green tick that says, well done, good job? Your customers don't care if your HTML validates. The only person who cares about that is you. There's nothing wrong with that sort of craftsmanship and professional pride, but never once have I heard: yeah, the new product is ready, customers want to buy it, but we're not shipping it because the HTML doesn't validate yet. So it doesn't matter; that's not the thing you should be measuring. Look at why you're there. What was the force that got you moving in the first place, and how do you measure that? If you're trying to be Uber: how many drivers, how many rides, how much revenue? If you're trying to be Spotify: how many bands, how many albums, how many streams? What is it you are trying to do? Because if you try to measure something else, you are going to end up affecting the team and the system that is trying to deliver, because they're going to start delivering the thing that's being measured; that's what they're being assessed on, that's where the sense of value is coming from. So there you go: that's the end of the deep theoretical physics stuff, and it's going to get a little bit silly now. Murphy's law. Murphy was, apocryphally, an engineer working with the United States Air Force in the late 1940s. They were doing experiments with deceleration on rocket sleds: they'd put a crash test dummy in a chair on a sled on a railway, put a rocket on the back of it, and smack it into a wall as hard as they could. And then they were like: darn, we didn't actually get any readings out of that; all we did was break the dummy. So they came up with the idea of using strain gauges on all the seat belts, so that when the thing crashed into the wall they'd be able to tell afterwards how much strain each of the contact points on the harness had been subjected to. John Stapp, I believe, was the Air Force colonel conducting the experiment, and Murphy was the guy who installed the strain gauges. The strain gauges were physically symmetrical, but they only worked one way round, and he installed every single one of them backwards. The phrase 'Murphy's law' was immortalized in a paper that Stapp wrote afterwards where, depending on which version you read and where the provenance comes from, it is either 'if something can fail, it will fail' or 'if there's a way of getting it wrong, that bugger will do it'. Murphy's law is interesting because it's something you want to keep in your head when you're thinking about user experience, interaction, and the way that you design and build your systems.
One of the great examples of this is the old joke that when the inventor of USB dies, they're going to lower his coffin into the hole, stop, pick it up, turn it over, and lower it in again. So for starters, you get systems where you can actually destroy something. One of the interesting distinctions between the United Kingdom, where I'm from, and most of Europe is that you can't put our mains plugs in upside down. With alternating current it doesn't really matter, because live/neutral or neutral/live, there aren't very many consumer devices that are sensitive to that. But I always find it a bit interesting: I have an extension cord where I put a European plug on a UK four-way brick so I can plug laptops and things in, and I had to go and buy a Norwegian plug for it. I opened the thing up and thought: which way around do you wire this? And the answer is, actually, it doesn't matter. But if it did matter, then every time you plugged in a kettle or a laptop or anything in Europe, you'd have a 50-50 chance of blowing it up, because the plug fits either way around. Now, USB looks like it'll fit the wrong way around, but then you realize there's actually a thing inside it that stops you inserting it backwards. Nine-volt batteries, the little rectangular PP3 batteries: they've got a little crown and a little knobble on the top so that you can't connect them the wrong way around. It's exactly that. If it matters which way around something is done, make it impossible to get it wrong. Don't build a strain gauge that looks exactly the same both ways round until you crash it into a wall with a crash test dummy on it. The Apple Lightning connector, the one they use on the new iPhones, is a brilliant example of this, because it works both ways. It's like a European plug: it's a USB connector, but it doesn't matter if it's this way up or that way up, so you can't get it wrong. You can't have that thing of scrabbling and going, ugh, it won't fit, and having to turn it over and do it the other way, like you do with micro USB. The other place where Murphy's law is interesting is thinking about the interactions between your users and your systems. Say you're building an input on a form and you're testing it. Somebody out there is going to do the wrong thing. Somebody is going to do it. The joke I love about this is: a bad software tester walks into a bar. They order a beer, two beers, a glass of wine, a gin and tonic, a packet of peanuts. Check, good to go: the bar is ready. A good software tester walks into a bar. They order one beer, two beers, a glass of wine, minus one beers, a million beers, infinity beers, a lizard. Because they're testing the edge cases. Because it is all too common for us as developers, since we have developed these mental models of the interaction models of the systems we're building, to have a picture in our head of what our user is like. And sometimes when you meet a real one, it can be a real shock, because they are nothing at all like the person you had imagined was going to be using your system. So when you are testing, when you are developing systems and putting things together, don't only think about how to make this right. Concentrate on the golden path by all means.
If your users are smart, they are paying attention, they are doing the right thing: make that as seamless and painless and pleasant as possible, because user experience is so fundamental to delighting users. But also think about what can go wrong, because Murphy's law says that if something can go wrong, it will, eventually. There is a corollary that says it will also go wrong at the worst possible time. Okay, so: ten laws of software development. We have had Newton's three laws, the equivalence principle, the uncertainty principle, and Murphy's law. Next, Zipf's law. Zipf was a linguist, and Zipf discovered something which is true and is freaky as hell, and no one knows why it is the case. Zipf was analyzing the frequency of words in natural language, spoken language and written language. The most common word in English is 'the'. The second most common word in English is 'of', which in any reasonable corpus of written English occurs about half as much. The third most common word is 'and'; it occurs approximately one third as much. The 5,570th most common word in English is 'source', and it occurs almost exactly one 5,570th as often as the word 'the'. There is this remarkable power-law distribution in natural language, and no one understands why it is the case. And when you start looking for it, you find the same power-law distribution cropping up in all kinds of places. If you look at the populations of large cities, the United States is a good example of this: the second most populous city has about half as many people as the first, the third most populous has about a third as many people, the fourth about a quarter, and the distribution follows all the way down. There is one theory that this is just a product of the fact that we are human beings, and that subconsciously, as a society, as a group, we tailor and optimize the things we have to worry about to keep them at a manageable scale. Something which is infrequent, we are naturally wary of using more often, because it is infrequent, so it seems alien. The word 'the' is very familiar; people use it readily because they're comfortable with it, because they see it all the time; it's omnipresent. A word like 'source' you need a more specialist application for.
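For reference, the pattern Zipf observed is a simple power law. If $f(r)$ is the frequency of the word of rank $r$, then:

$$f(r) \propto \frac{1}{r}$$

so the rank-2 word occurs about half as often as the rank-1 word, the rank-3 word about a third as often, and the rank-5,570 word about one 5,570th as often, exactly as in the examples above. The same power-law shape underlies the 80-20 patterns discussed next.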
This 80-20 rule crops up all over the place. It occurs in code bases: if you look at the number of function calls, function routines, and the cyclomatic complexity of your code, you'll find a similar power-law distribution, with something at the top, something used about half as much, something about a third as much. And this is very closely aligned with something called the Pareto principle, also known as the 80-20 rule, which applies in all sorts of situations. Basically, 80% of your users will only ever use 20% of your features. Have you ever had the experience where you come into work on a Monday, or sit down to work at home on a Monday, and by the end of the day you're pretty much finished with something, and then it takes you the whole rest of the week just to get the last couple of bugs out, get it rendering in that one browser, and work out why that unit test is failing? 20%: literally one day out of five. Monday is the 20% of the time where you get 80% of the feature set done and delivered, and then it takes you the rest of the week to do all the little snagging edge cases that really aren't that important, but if you're going to ship the thing, they've got to be right. As I said, 80% of the bugs and defect reports in your software will come from 20% of the lines of code. I'm not going to go as far as saying that on teams, 80% of the work is done by 20% of the people, but I'm sure some of you are thinking: hang on. Zipf's law holds true for all sorts of data sets. It's really interesting to think about when you're prioritizing and deciding the feature set and scale of the systems you're going to build, but we don't know why it's the case. There's just something about things created by people that likes to follow a power-law distribution, and I would be fascinated if one day they actually find out why. So, seven laws down; we're going to move on to the last three. The last three are where we start really talking about laws of software, as opposed to taking laws of physics and trying to make them fit. The last three laws are Amdahl's law, Conway's law, and Moore's law. Amdahl's law is about the rate of improvement you can achieve in parallel computation by adding more resources to it. A good example: say we're organizing a party, a nice big NDC after-party tonight, and we need to prepare for it. Say there are 50 people in this room, and we want to make a little bag with sweets and stickers and stuff for everybody, because it's not a party unless you get a little bag. And we need to go and pick up the cake, and the cake is in Sweden, because we messed up the ordering. So we have a workload. Say it takes 10 minutes to make one of these little party bags: 50 pieces of work at 10 minutes each, so that's 500 minutes, a bit over eight hours. And somebody needs to get a train to Sweden, which will take four hours, pick up the cake, and bring it back, another four hours. So if I nominated you, sir, and said you're going to have to do all the work because it's your party: how long is it going to take you? You've got eight-and-a-bit hours to make up all the party bags, and then eight hours to get to Sweden and bring the cake back, so you're looking at about 16 hours. Take short days, long lunches, and call it three days' worth of work. Get someone to help you, making 25 party bags each, and suddenly you've halved that part of the workload: you do a day, you do a day, you've got all the bags made, and then someone has to go and pick up the cake. Then someone says, well, this is good; why don't we get someone else to go and pick up the cake while we make party bags? So you go to Sweden and pick the cake up. Bang: we've gone from three days to two days to one day. Now everyone here decides to pitch in and make party bags. How long is it going to take? Still one day. There is no way adding more people can get it done in less than a day, because we have a piece of time-bound work that cannot be parallelized: there is no way someone can bring the cake back while someone else is still on the train to Sweden to pick it up.
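Amdahl's law makes this precise. If a fraction $p$ of the total work can be parallelized and the rest is inherently serial, the speedup you get from $N$ workers is:

$$S(N) = \frac{1}{(1-p) + p/N}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1-p}$$

In the party example, the 500 minutes of bag-making parallelizes, but the eight-hour cake trip does not: however many helpers you add, the elapsed time can never drop below the eight hours of serial work, which is why one day is the floor.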
What this does is impose an upper limit on the number of effective cores, threads, or processes that will give you any improvement in how a piece of work gets done. So, Moore's law, the other one we're talking about here. Gordon Moore worked for Intel, and I'm sure you're all familiar with Moore's law: computers get twice as fast for the same money every 18 months. His original observation, back in 1965, was that the number of transistors you could fit on the same piece of silicon was doubling every 18 months or so, and he thought this would probably hold for another two or three years. It has proved to be true for decades; it was absolutely spot on. And because so much of systems design, interaction, and system capability scales directly with the amount of processing power, which is the number of transistors you can fit on a silicon wafer, computers basically get twice as fast every 18 months for the same price. So on the one hand we have systems becoming more and more powerful in terms of the amount of parallelism they can do; on the other hand we have Amdahl's law, which imposes limits on how much of an improvement you can get by adding more processors to a computation. Now, what's interesting with Amdahl's law is to think about it in terms of people. Go back to our example: we had a project that could not be fully parallelized. It's half past three now and the conference finishes at half past six; there's no way we can have our party ready to go in three hours, no matter how many people we hire to help us organize it. If you are working on a software team and you have activities which cannot be parallelized and are time-bound, they impose an upper limit on how much faster you can go by adding more people. The classic example of this is meetings. If you have eight hours' worth of meetings that are necessary because that's how your project works, there is no way you can hire enough people to do that project in less than eight hours. So it imposes an upper limit: say you've got 20 days of work to do, of which two days are meetings. You can put 10 people on that team, but the 11th person will have nothing to do, because the meetings are happening and everyone else is already busy; all the work is being taken care of, all your tickets are in flight, everything is already happening. Now think about these patterns of locking components in terms of distributed system design. Imagine for one second that we had an online commerce website. The customer connects to our website and places an order; they're ordering a cake online, because we're not going to Sweden again. We talk to the inventory system and determine that we've got that particular type of NDC party cake. We check with the accounts system: is this customer validated? Yes. We send a message to the warehouse system: please ship this cake to this address. And then we send a response back. And each of those systems may take three, four, five seconds to respond.
But here's what we're going to do: whenever anyone hits our website, we set up a massive distributed transaction and take an atomic lock on every single component in the entire system, and until we're done, none of them is allowed to deal with any other requests or process any other work. So while we're ordering the cake, the website is effectively down for everyone else in the world. The inventory system is down for everyone else in the world. The stock control system is down. The delivery system is down. Put it like that and it seems absolutely crazy, doesn't it? No one would ever build software like that. But you'll call half a dozen people into a room, sit them down for a meeting, and say: none of you can do anything else for the next two hours. We are getting very, very good at modeling distributed systems in terms of autonomous actors that have their own workload, their own inputs, and their own outputs, and that can deal with the work that comes to them as effectively as possible, and those systems are easy to scale up. And this brings us to the tenth of the laws we're talking about today: Conway's law. Mel Conway observed in 1968 that organizations that create systems will tend to create systems that reflect the communication structures of the organizations that build them. This is something I've done a lot of thinking, a lot of research, and a lot of speaking about. Pieter Hintjens summarized it as: if you have shit people, you'll have shit code. Which is a fairly pithy synopsis of how it works. What it means is that if you've got two teams who, for whatever reason, communicate among themselves better than they communicate with the other team, you're going to end up with two very tightly coupled components with some kind of interface layer between them that doesn't work quite so well. If you've got three or four or five teams, you're going to end up with three or four or five different components. If you've got teams that talk to each other very readily, then the interface between those two teams' components is going to be good, fit for purpose, because those people will communicate. If you've got two teams who are very bad at maintaining the boundary, you are going to get tight coupling, because two components that were supposed to be separate end up bound together. So we have these three laws. We have Moore's law, which says that computation is getting cheaper and systems are getting more powerful. When I first started doing software development, I probably spent about an hour a day waiting for my PC to do things. Now I probably spend about an hour a day waiting for my PC to do things, because we're not using all of this amazing computing power to do the same things quicker; we're using it to do more things. The fact that the amount of time you spend waiting on the system has remained constant must mean that the complexity of the systems we are building is doubling. There is a lovely quote: the greatest achievement of the software engineering industry is to completely neutralize the astonishing advances made by the hardware engineering industry. The systems are getting twice as powerful, but they're not getting twice as fast at actually doing real stuff, which means the work they're doing is getting twice as complicated, which means we are building more and more complicated systems, and the complexity of those systems is doubling every few years.
Conway's law says that the systems will reflect the communication structures of the organizations that created them, and Amdahl's law says there is an upper limit on how effective your team size can be if you have things like meetings and work that cannot be parallelized or broken down. So the way we can synthesize all of these laws into recommendations, into things we can think about, is to approach the task of building software differently. We've got to get better at communicating asynchronously. We've got to get better at interacting across teams and organizations. The open source model got this right almost from the word go. How many people here have been to a meeting for an open source project they work on? Let alone a meeting at 9 o'clock in the morning where coffee and donuts are served? Open source got this distributed, asynchronous model where people do work when they're ready, information and decisions are retained in writing, and people can work on things in their own time. That model is going to become more and more widespread, because in order to keep pace with the complexity of the systems we're building, we need bigger teams, and there is a limit on how big teams can get if you are doing things like having meetings. So get rid of all the meetings, build a massive distributed team that's basically everybody in the world, and love the fact that everything you do is going to be running twice as fast in two years' time, because you're going to be doing twice as much with it. And I think we're going to stop it there. Thank you. (Applause) Are there any questions? I love it when there are no questions; it means you have all achieved complete enlightenment. So yes, thank you all very much for coming. Enjoy the last bit of the conference. If you're staying around over the weekend, come to PubConf on Saturday; it's going to be awesome. Thank you very much for coming.
It's easy to think software is magic - but it's not. Most of the time, it's not even sufficiently advanced. Like everything else in our world, the people you work with and the products they build are subject to the fundamental laws of nature. In this talk, Pieter Hintjens and Dylan Beattie will explore the laws of our universe - from Amdahl to Zipf, from Newton's Laws of Motion to Heisenberg's Uncertainty Principle, from Conway to Murphy to Godwin to Moore. We'll discuss how they apply to your projects, even when you want to pretend they don't, and explain why if you ignore them, your work will collapse like a badly-designed bridge.
10.5446/51696 (DOI)
All right. Looks like it's nine, so it's time to start. Thanks everyone for coming; I'm impressed. This is a bit too early for me, so if I say something that doesn't make sense, it's probably because of the early hour. I put in a rather boring title, as I realized when I looked at it yesterday, so the topic will actually be much more fun than the title might suggest. I'm going to be talking about UK house prices, which is a fun topic, and we are going to do some time series and data analysis. And it's going to be reasonably big: I have some larger data sets that I'll show you as well. The other theme you'll see is a number of very interesting F# community projects in the area of data analytics and working with big data. My name is Tomas Petricek. I work with fsharpWorks, which is an F# consultancy; we do trainings and help people with F#. First of all, I have to say this reminder: F# is a general-purpose language. There is a misconception that F# is only good for data analytics and science-y stuff, and that's not really the case. I think the other talks here at NDC did a really nice job of demonstrating this, because there was a talk about doing user interfaces with F#, with lots of interesting libraries for that, and a talk about doing web development with F#. So the language itself isn't restricted to some particular domain; it works great in many areas, really anywhere there are good libraries for it, and there are great F# web libraries and lots of other interesting things. But I will be talking about the data-analytical part, partly because I've actually been working on this: I spent some time at Blue Mountain Capital, which is a hedge fund in New York, and we did some of this work while I was there. That doesn't mean F# isn't useful anywhere else; it just means this is an area where I contributed and I know some things about it. So why would you use F# for this kind of work, where you are working with data and doing some calculations? There are other answers for why you would use F# in general; lots of them will be similar, but you might give different answers for why you would use F# for the web. For analytical components, the main thing is that F# is a nice programming language which is efficient, leads you to more correct code, and lets you do things faster, and I think you'll see some of the reasons why in the talk. If you're doing some sort of analysis, you might use something like R or Python, and those languages have very rich libraries; we'll see that you can actually use R from F#. But they're not compiled and they don't have static typing, which are really useful things when you're working with lots of data and you want to explore it quickly. I think the stuff I'll be showing in the talk is a nice example of what you can get when lots of people in the community collaborate. We started the F# Software Foundation, which is now a US nonprofit, so you can join and help us make F# great. Not 'great again'; it's been great always.
One nice thing about the F# Software Foundation is that it brings together all the people who are involved. That includes the language design, which is mostly done at Microsoft Research; the various editor and tooling authors, including Visual Studio, Atom and VS Code, and Emacs; the people who are using F# for commercial projects, who sometimes open source some of their libraries, which is the case with Blue Mountain Capital; and the broader open source community. You'll actually see all of these involved in the samples. What I want to show you first is a local, small-data sample. The UK actually has an open government data website where you can download UK house prices. They have a one-month download, which is about 20 megabytes, but you can also download prices from 1995 until today, and it includes every single house sale in the UK. So we'll start by looking at the local data set, and I'll switch to Atom. I'm using Atom and a project called Ionide, which is the F# integration. Right here, I'll just start by loading some of the F# data analytics libraries. FsLab is the library I'll be using for data analytics, and it brings together lots of other components. And this is my directory with a file. So what I'm going to do here is just load the CSV file: CSV, the NoSQL data format of the future. This file actually contains all the sale information for houses in the UK in April 2016, and we are going to look at some of the interesting things in there. There are lots of different columns in the file, and half of them I have no idea what they mean. So what I'm going to do first is just get a subset. The data structure that I loaded, a data frame, is inspired by data frames in R, and it's really just a two-dimensional, table-like structure, like the CSV file itself, and I can do lots of operations on it. One of them is selecting just the columns I care about, things like price, postcode, and town. And one nice thing you can see here is that we've added to Ionide the ability to display anything you load as HTML, so when I load the frame, I actually get it here as a table, and I can even scroll through the table and find all the different house prices. Let's say I'm interested in Cambridge; I live in Cambridge, which is a nice place except for the house prices. So maybe what I want to do is look at Cambridge and find where I could possibly afford something. We are going to be using the normal F# pipe; the way you work with these frames is pretty much the same as the way you work with arrays or sequences. There are lots of operations in the Frame module, like filter rows, where I go over all the rows, it gives me the key and the data in the row, and then I can say: let's have a look at the cases where the town is Cambridge. And this filters out some of the... oh, I did something wrong; I didn't run this line. And now we have filtered it down to only Cambridge. There is actually some noise, so the other thing I need to do here is to only look at cases where the year is 2016. And then the next operation I have on the frame, and you can do lots of different things, is to sort it by the price. So if we take this and say sort by price, now with four lines of code I can actually iterate over the results.
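A minimal sketch of the kind of code being narrated here, assuming FsLab and Deedle are loaded via the FsLab.fsx load script; the file name and column names are illustrative, not necessarily the ones from the demo:

```fsharp
#load "packages/FsLab/FsLab.fsx"
open Deedle

// Load one month of UK price-paid data from a local CSV file
let prices = Frame.ReadCsv("uk-price-paid-april-2016.csv")

// Keep only the columns we care about
let sales = prices.Columns.[ ["Price"; "Postcode"; "Town"; "Date"] ]

// Cambridge sales in 2016, sorted by price
let cambridge =
    sales
    |> Frame.filterRowValues (fun row ->
        row.GetAs<string>("Town") = "CAMBRIDGE" &&
        row.GetAs<System.DateTime>("Date").Year = 2016)
    |> Frame.sortRows "Price"
```

Each step produces a new frame, so you can run any prefix of the pipeline in F# Interactive and inspect the intermediate result.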
There are some weird things at the beginning: a house for 5,000 pounds is probably not a real house; that's more like a garden shed or something. But if you scroll down to the end, you can see that for 2 million pounds you can buy a beautiful house on Hills Road. So this is a nice way of interactively exploring the data, because I can write my code, run it immediately, see the results, and, thanks to some of the nice new tooling here, even explore the results we get. One more thing I want to do here is some aggregation. So I'm going to take this and first filter out some of the sales that aren't really meaningful. I'll keep the price and the town, but the type and duration columns tell me something about the kind of sale it is. Duration 'F' is the normal thing in the UK system, which I don't quite get, and means you actually own the house. So we keep only some of the houses from the data set and assign the result to a new frame. You can see that I'm using the usual functional style, where I just apply some transformation and get a new frame. But it isn't actually copying all the data every time, because it's all immutable, so a lot of the data can be shared between the individual values. So now I've cleaned it, and I'll show you one more interesting operation: how can I aggregate this? There's a useful function for that, where I can say: aggregate all the data by town and apply some sort of folding or aggregation operation to one of the columns. What I'm doing here is just saying: aggregate by town and give me the average price per town. If I run this, you can see the result: I get average prices by town. What I want to do next is take the average prices by town, count how many sales there are in every single town, and plot the 20 most expensive places, but only for places with enough sales, so that one fancy house somewhere in the countryside doesn't skew the data for the area. So I'll need to count how many sales there are for each town and put these two data sets together. To do that, I can use indexing, which is a built-in feature here: if I index the data by town, you can see that it transforms the structure so that I have the town on the left in bold, as the key. Then I'll have two frames which share the same key, and I can nicely put them together. So these are the average prices, and I'm going to add counts, which I get by just replacing mean with count. And now I have counts. What I can do next is say: take the average prices and add a new column named count, which we get by taking the column from the counts frame. This is the merged frame: if I run all of it, I have the town name as the key and then the price and the number of sales in the place. I'm just going to wrap this up using a trick I prepared before: first we take all the data and keep only the towns where there are more than 100 house sales, sort by price, and take the last 20. You can see the results here. And then the next thing I use here is XPlot, which is a charting library, and it actually has a nice Ionide integration as well, so it takes the chart and shows it right in my F# Interactive.
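A rough sketch of that group-and-join step, again with Deedle and XPlot; this expresses the aggregation using grouping with level statistics, which is one way to do what's described above, so the exact demo code may differ:

```fsharp
open Deedle
open XPlot.GoogleCharts

// Group all sales by town and take the Price column,
// giving a series keyed by (town, original row key)
let byTown = cleaned |> Frame.groupRowsByString "Town"
let prices : Series<string * int, float> = byTown |> Frame.getCol "Price"

let avgPrice = prices |> Stats.levelMean fst    // average price per town
let count    = prices |> Stats.levelCount fst   // number of sales per town

// Combine the two series into one frame, keyed by town,
// and keep only towns with a meaningful number of sales
let summary =
    frame [ "AvgPrice" => avgPrice
            "Count"    => Series.mapValues float count ]
    |> Frame.filterRowValues (fun r -> r.GetAs<float>("Count") > 100.0)

// Plot the 20 most expensive towns as a bar chart
summary?AvgPrice
|> Series.sort
|> Series.takeLast 20
|> Series.observations
|> Chart.Bar
```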
And if you look at this, you can see that the most expensive houses are obviously around London, and I think it's sort of amazing how clustered around London it is these days. The average price in London is something like one million pounds; multiply by roughly 10 to get the value in kroner. Everything else around it is significantly cheaper, so moving to London is probably a tricky problem. All right. So that was one example of what you can do, and I showed you a couple of new things here. The main one is the F# Interactive on the right, which is a new feature in Ionide. Ionide is an F# plug-in for Atom and Visual Studio Code. I was using Atom, but what I was showing is hopefully soon coming to Visual Studio Code as well; Atom has a more flexible extensibility model, so you can hack it in lots of interesting ways, like actually inserting HTML output into your Atom F# Interactive. The other interesting thing is that it's very open, and Ionide actually supports a lot of the great F# community tooling. You have support for the package manager, about which I'm not going to say anything more; it's a dangerous topic and I don't want to get shot later on. It also supports things like FAKE, which is a really nice F#-based build system. So you get integration with lots of other great tools. And the new fun thing we've been adding to Ionide is this way of adding HTML formatters. The plan here is that when you define such a formatter, you'll be able to use it in Ionide, which is what I've been doing. There's also a project called FsLab Journal, which lets you generate nice reports, HTML reports with text and code and outputs, so it will work there. And I'm also talking with the people working on the F# integration for Jupyter notebooks, and we want to make the same model work there as well. And it's really simple: when you have some interesting object you want to format as HTML, all you have to do is register an HTML printer on fsi for your object type, then say which styles and scripts you want to include, give your HTML, and it gets formatted.
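From the description in the talk, registering such a formatter looks roughly like this; the registration call and its exact signature are an assumption reconstructed from what's said above (a list of styles and scripts to include, plus the HTML body), so treat it as a sketch rather than the definitive API:

```fsharp
// Hypothetical example: render a custom type as HTML in F# Interactive
type Temperature = { Celsius : float }

fsi.AddHtmlPrinter(fun (t : Temperature) ->
    // First part: extra head content (styles, scripts) to include
    let head = seq [ "style", "<style>.temp { font-weight: bold; }</style>" ]
    // Second part: the HTML fragment to render for the value
    let body = sprintf "<span class='temp'>%.1f &deg;C</span>" t.Celsius
    head, body)
```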
And what I'm doing here is that I just call the provider to give me the data frame that represents all the house prices in the UK for the last, what is it, 20 years, roughly 20 years. So all the house sales over 20 years. And this is exactly the same sort of frame that I was using before. So I can do the same things I was doing before, like pick only the columns that I actually understand. So I'm going to do that. And when I select this and do alt enter, it actually downloads the data here. And what I can actually do here is that I can even scroll through this table. If I actually succeed at clicking on this little thing here, I can scroll down. And you can see this is the logging coming. As I scroll through it, I just picked some place in the middle, it needed to download the data for 2nd and 3rd May, because that's where the scroll bar sent me an event. But then it just loaded the data for 2003, which is where I actually am right now. So what I'm doing here is that I'm just scrolling through roughly four gigabytes of CSV file with UK house prices. And I can filter it, and it just does it without downloading all the data, because I'm just removing some columns; I make it a bit smaller. So what else can I do with it? I can, for example, look at some column. So if I look at just street, I can access the column and it just picks that one column and gives me a single data series with the names. And you'll see you can do a lot more things with it later on. So now I've been working with the frame, with the data set, and you can nicely explore it here in the IDE, but it's never actually trying to load the entire data set. So in this case, it's roughly four gigabytes, but it doesn't really matter. If it was four terabytes, whatever you can put in your storage, it would still work the same. So let me disable the logging. And typically what you can do if you have data in this structure is, the first thing I want to do is just explore it to see some interesting parts of it. And now I'm going to just select some small range and do a local calculation over it. So the typical pattern is, I just want to understand the data. And before I even try to run something over the entire data set, I want to test it on some small subset locally. So I'm going to take the houses and, from the rows, I'm going to do some filtering and just take one month of data from April 2010. So the idea is, I want to compare the data I had for 2016 with a similar month in 2010. And let's just save this. And one of the functions, the helper that I defined earlier, materialize, is just the helper that actually forces the download of the data. So now you can see this will be running for a bit, because it needs to issue a couple of requests to the data source to actually download all the prices for the one month period. And now we have it. I'll say a bit more about how this actually works later on. So now we've actually downloaded one month of data and we can play with it locally as before. And I'm just going to copy the same code I wrote before where we take the data and aggregate it. So this is exactly the same thing I was writing before. And now if we take the last, or the 20 most expensive, places and plot it, then you can see the difference between the chart I was showing earlier somewhere. If I can scroll through this and find it, where was my chart? There it is.
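The "explore, then slice, then compute locally" pattern from this part of the demo looks roughly like the sketch below. The dt and materialize helpers are the speaker's own, so the definitions here are just one plausible reconstruction, and the provider module name is hypothetical; the frame is assumed to be keyed by date.

```fsharp
open System
open Deedle

// Reconstructed helpers: build a date key, and force a frame to be evaluated
let dt y m d = DateTimeOffset(DateTime(y, m, d))
let materialize (f: Frame<'R, 'C>) =
    f |> Frame.mapRowValues id |> Frame.ofRows   // enumerate all rows once

// `houses` is the on-demand frame returned by the demo's provider library
// Narrowing the virtual frame: selecting columns downloads nothing
let small = houses.Columns.[ ["Town"; "Street"; "Price"; "Duration"] ]

// Accessing one column gives a single, still virtual, series
let streets = small.GetColumn<string>("Street")

// Slice one month by key range; only that range is fetched, then downloaded
let april2010 = small.Rows.[dt 2010 4 1 .. dt 2010 5 1] |> materialize
```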
So this was how the 20 most expensive places in the UK looked last month, with, like, a big red blob in London, and this is how it looked back in 2010, and you can sort of see, if I zoom in, there's a bit more diversity. Like, there's actually orange. It's not just green and red. And the average house price in London was half of the one this year. So over six years it basically doubled. All right. So here, I think what was really interesting is that instead of using the local dataset, which you can load from a CSV file, I was using this virtual big dataset which was loaded on demand, and the tooling and the libraries are all actually able to cope with it, because they only access the bits that are needed. I could even show it here in the output in the grid, because the grid also only asks the data frame to give me the data that you can see on the screen. So as I scroll through it, it asks the frame, give me this range, and that works because I can always load one part of it, one range. So this is the name of the library, and I'll give you all the links later on. So Deedle is basically an exploratory data frame and time series library, which means that it lets you really easily drill into the data and see what it looks like. And with tools like Ionide, that's actually so much nicer. It has this in-memory data source, which is very well tested. It's what BlueMountain is using. And the virtualized data source is something that's sort of more new. We've been working on it, and it lets you do very similar operations but over data that's not actually in memory. Now, if you wanted to use it, it's not tied to one specific data source. So it's not like you have to have Azure or you have to have Cassandra or you have to have whatever. You just have to implement two interfaces. One is basically defining the addressing scheme. So, how do I map the keys to some actual offset in the frame, and what's the next address, and so on. And there are samples on the internet that do this for partitioned data, when you store your data partitioned by month or partitioned by day; there's a nice example showing that. This is what I'm doing here. And then you have to define this virtual vector source, which is the thing that actually loads the data. So it takes a little bit of work to actually do this, but it doesn't matter what data source you're using. Here I was using one example with the houses. That's just pulling the data from a REST-based API. So I have a REST-based API on Azure, written using Suave, that lets me say, give me data for this day. And that's all it exposes. And that's all I needed to actually be able to show it in the frame. So the one thing that I was doing here was that I always worked with it locally. So even though I have this big data set, I only downloaded one month of the prices to my machine and then did some calculation there. And this is very nice, because you can actually play with it and write your computation and see if it does what you want it to do. In the sort of true F# Interactive spirit, you always spend a lot of time writing your code in F# Interactive, running it, testing if it works. And only when you've actually written it and it does what it's supposed to do, you'll run it on some larger data set, or you'll turn it into an application that you can put in production. So here, we've done the interactive bit, but now we want to do some calculation over a larger chunk of the data set. And for this, I'll actually need to reset my REPL.
The reason why I need to reset my REPL is I'll be doing some cloud computations, and when I send some code to the cloud, I don't want to send all the data that I already loaded in my REPL, because it would be in scope. So what I'm using here is another really cool project called MBrace. And don't worry about the names, I'll have a summary slide and I'll give you enough time to take a picture later on. So MBrace is a library that lets you do the same sort of interactive programming style that you're used to from F#, but run it in the cloud rather than running it locally. And I actually started my cluster in the morning, so assuming Azure hasn't consumed all my money, this will work. If you want to test it, you can do it on a local cluster, but then it misses the point. It's not funny. It's no fun when you have clusters spinning on your machine and it's like 10 different machines, but it's all on my little laptop. So this object here, cluster, is the connection to the cluster, and I have my houses here as well. So the rest is pretty much the same. And what I'm going to do is I'll start, this is a function. So I wrote this function before, which just loads the houses, gets a range, like, between two dates, gets only the sales in a specific town which are freehold, which means you actually own the house, and then it just calculates the average price. And rather than running it locally, I want to run it in the cloud. So what you do first is you say cloud. And this is using the F# computation expressions to basically wrap the function and change how it's executed, so that rather than running locally, we can ship it off to the cloud. And now I can define my function and we have a cloud function here. So what I'm going to say is, so what do I want to do? I just want to do it for, let's just look at April. So I say dt 2010 4 1 to dt 2010 5 1 for London. Now this expression represents a cloud computation. And what you can do with it is that you can say cluster.CreateProcess. And this will create a process. It will basically take your code that you have in your F# Interactive, connect to the Azure cluster in this case, and send the code there and start it there. So it prints some logging, and it created a work item for the cluster. Here I'm using Azure, but they also have an AWS backend for MBrace, which is a fairly new thing. And I can then check the status, and it's done already. So this was fast. And I can access the result. So this is the average house price for London in 2010. Now this wouldn't be all that interesting. We just processed one month. But what we can do with the cluster is that I can say, let's go over years from 1995, which is the first year, to today. And here I'm building a list of cloud computations. So we're going to say, calculate the average price for London in this case. And I need to change my years. So I'm just looking at April, because I don't want to burn my Azure compute center. And then we are going to return the year together with the average price. And so this is average London. And now, before doing CreateProcess, I can say Cloud.Parallel. If I know how to type l, Cloud.Parallel. And what this does is that it takes the list of cloud computations, which is like 20 processes, and it distributes them over the entire cluster. And if I look at my computation status, it is running. And I can also look at the cluster here. And I've actually written a little formatter for the cluster. So here are all my eight machines in Azure.
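Put together, the MBrace part of the session looks roughly like the sketch below. It is a reconstruction, not the actual demo script: cluster is the connection object created earlier, dt is the date helper from before, and averagePriceIn stands in for the function written in the talk, with its body elided.

```fsharp
open MBrace.Core

// A cloud function: the same F# code, wrapped in the cloud { ... } builder
let averagePriceIn (start: DateTimeOffset) (finish: DateTimeOffset) (town: string) = cloud {
    // ...load the rows between the two dates, keep the freehold sales in
    // `town`, and return the mean of the Price column, as in the local code
    return 0.0 }

// Ship one month's calculation to the Azure cluster and start it there
let proc = cluster.CreateProcess(averagePriceIn (dt 2010 4 1) (dt 2010 5 1) "LONDON")
printfn "%A" proc.Status     // poll while it runs
printfn "%f" proc.Result     // block until the result comes back

// Build ~20 computations (April of every year) and fan them out
let londonProc =
    [ for y in 1995 .. 2016 ->
        cloud {
            let! avg = averagePriceIn (dt y 4 1) (dt y 5 1) "LONDON"
            return y, avg } ]
    |> Cloud.Parallel
    |> cluster.CreateProcess
```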
In the cluster view, there's the CPU usage and network, because they're actually fetching the data, and the active work items. So it evenly distributed the work across my entire cluster. I'm going to do the same thing for Cambridge. So I'll just do some copy paste here. This is the nice thing about doing things in an interactive way, because you don't have to be clean. You can just copy and paste things. You can clean it up later, when you know it works and it does the right thing. For now, we'll just start the same sort of computation for Cambridge. And when it starts, we can check: London is still running. Cambridge computations are running too. And here's my cluster. You don't actually need that strong an internet connection to ship the work to the cluster. I was even able to run it in my hotel, where even checking Twitter is a bit of a problem. So when this eventually finishes, we'll get all the data. And what I want to do then is just to compare the prices, the price changes in London and in Cambridge. So I'll write the code for that while it's running. I'll take the Cambridge prices and London prices, this is the result, draw a line chart, and then add labels saying this is Cambridge and the second bit is London. All right, so how's my cluster doing? Zero work items, so it looks like everything's completed. Yes, it is. And we can get the data and draw a chart here. So yeah, this is the interesting growth of average prices in London, although you can see that it actually went down for the first time in a long time in the last month. In Cambridge, it's still going up. You could fit some model to it, and it will tell you that next year, you won't be able to buy anything anymore. Cool, so this worked. And what's really interesting here is that we took the same sort of interactive style that lets us easily explore data locally, and we used it to run the same code, in this case on an Azure cluster, rather than locally. But you don't lose any of the nice sort of exploratory style, which is, I think, what makes data analytics in F# really powerful and really efficient. You can just say, cluster, run this, and then check what's happening in the cluster, once you've written the code, run it locally and tested that it works. And the amazing thing here is that the transition from the local code to the cluster code is really just a matter of adding this workflow. And, well, no, this is actually now in C# as well, so you don't have to go to the future-of-C# talk to see this; this is the present in C#. So in F#, I think in 2008, the new feature in F# was this async workflow, which, it's like async/await in C#, lets you run code without blocking. And this was done as a more general purpose feature. So these days, when cloud becomes a thing, what the MBrace project is doing is that it's just taking the same programming model, the same construct, but defining another computation that, rather than doing things asynchronously, does things in the cloud. And I think this is a really sort of amazing power that you get from F# here, because without changing the language, they were able to build a library that uses the same principles for something that's really important, really cool today. So the project is MBrace. You can find more information on the website. And it basically does what I was showing here. It takes this nice data scripting approach, but brings it to the cloud. It has the cloud computations, which is the curly brackets thing that I was showing.
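The comparison chart is then a couple of lines of XPlot; londonProc and cambridgeProc here are hypothetical names for the two cloud processes started above, each returning (year, average price) pairs.

```fsharp
open XPlot.GoogleCharts

// Each .Result blocks until the cluster is done and hands back the pairs
let london = londonProc.Result
let cambridge = cambridgeProc.Result

Chart.Line [ cambridge; london ]
|> Chart.WithLabels [ "Cambridge"; "London" ]
```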
MBrace also has sort of data flow support. So if you have lots of data that you want to process, you can either do it as I was doing here with the frames, but you can also just use the MBrace programming model, where they have sort of nice optimized data flow streams that you can process across lots of different machines in the cloud. So that's MBrace. And I have one more sample, which is moving away from the house prices to some financial data. And I just want to show you a few more things that you can do with this. So let me close some of the things. And here, I think, everything I have here, I already loaded. Now this time, I'm going to be using another implementation of the virtual frame. The previous one was sort of fairly simple, and it used a REST-based API as the source. Here I'm using Azure tables, but again, it's just an implementation of an interface. So you could use anything you want. You could use your own data sources that are local in your own little cloud, if you have one, or you could build one for AWS, whatever. I do have my cluster. It's still there, hopefully. And this frame, WDC, it's, I think, all the trades in New York or on NASDAQ or somewhere for the Western Digital Company. And this is actually, I think, the price of every single trade of the company's stock over some number of years. So you can see that even downloading the first and the last day takes a little bit of time, because, like, at the beginning, there's not that many of them, but if you scroll down a bit, it needs to download the second day. If you scroll down a bit, you'll see that there's, like, every second, a lot of trades. So now I just scrolled, so it's logging that it needs to get the second day of data. And you can scroll through it, you can explore it interactively. If I wasn't connected over Wi-Fi to an Azure data center in New York, then the scrolling would be faster. And as I was saying before, with these virtualized frames, you can actually do quite a lot. So I can, for example, take the ask price as a series, and it just gives me one sort of view of the data, but you can even do some calculations with it. So for example, if I wanted to see what's the difference between the ask and bid price, then I can just say, get one column, subtract it from another column, and you get the difference. And this is still keeping the sort of, it knows that it's still virtualized, so it's not getting all the data and evaluating anything; it only does it when I actually want to see it. And you can do all sorts of calculations here, like multiply it by something. You can even use built-in F# functions, like round. It still understands that this is just the operation applied to a virtual frame. I can also add this as a new column, so this is the only place where I can actually mutate the frame, where now I have the bid and ask and the difference and so on. And the next thing that I want to show is doing some other interesting calculations over this. So what do I want to do? I want to take just one day of data, so I'm going to say rows. Well, actually, I do have a trick here, so I don't have to write this. Here I'm saying, take this date range, which is the first day in my data set, and take the differences. And if you're doing some financial calculations, then in Deedle, if you have a time series, you can do other interesting operations on it.
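Before moving on to the time-series operations, the lazy column arithmetic just described looks roughly like this; wdc is the virtual frame of trades, and the Ask/Bid column names follow the talk.

```fsharp
// One virtual series; nothing is downloaded yet
let ask = wdc?Ask

// Column-wise arithmetic stays lazy: evaluated only for the rows you view
let diff = wdc?Ask - wdc?Bid

// Built-in functions keep working on the virtual series too
let scaled = round (diff * 100.0)

// The one mutation allowed: adding the result as a new column
wdc?Spread <- diff
```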
So what I want to do here is, let's just take the moving mean, which is fairly basic, but you can see there's a lot more interesting stuff. And then, oh, I need to do one more thing, which is, the keys in the series are actually DateTimeOffset values, and our charting library can only deal with DateTimes, so I need to transform the key so that it's a DateTime. But that's something that will probably get fixed soon. And I can draw a line chart of the moving average over one day, and I did something wrong. So what did I do wrong? Let's try this. And now my Atom hangs for a bit, because the charting library actually has a tough time rendering all my data points. But this is the moving average of the prices, which is one sort of thing you can do. The other thing you can do is various sorts of resampling of the data. So, moving from prices to sampling. Here I'm again taking the prices, and I'm calling this Series.sampleTimeInto function, which will basically split it into regular one-minute chunks and calculate the averages over one minute. And the next thing you want to do here, so, in F#, there are quite a lot of libraries for doing numerical computations. There's Math.NET and so on. But equally, if you know R, then in R there's, like, thousands of packages for pretty much everything that has to do with numbers. And so it would be nice if I could use some of the financial libraries that people have written for R. And in FsLab, one thing that's there is this thing called the R type provider, which actually imports all the R libraries that you have installed in your local R installation, and it lets you access them from F#. So, one of the packages is the stats package. That's a built-in one. There's also, if you're doing finance, one of the many financial packages, quantmod. And so I can use some of the R functions on my values. So what I can say is R dot. And now I actually see all the R functions that the R type provider imported. And one of them is this Delt, which basically does, like, returns on prices. So it takes the current price, subtracts the previous price, and it calculates, like, how much you would have gained over the period. So we can apply this to our values. And this gives me a result, which is some R value, and I can convert it into just a sequence of numbers that I can then sort of get back into my F# world as a series. So if I run this, it actually invokes R on the data I got from the remote storage. And here I'm running R locally. I got all zeros, which is just because my formatting is actually set up to show only two decimal points. So if I multiply it, I've got some returns here. So here I was doing, again, stuff locally on the data that I got from some remote storage, which is actually what people at BlueMountain do all the time, because you want to test some strategies locally. And the last bit I want to show you is that this works in the cloud as well. And again, I'll need to reset my F# Interactive, so that I'm not pushing all the local data into my cloud. I'll need to reload some of this again. And I need my cluster. And I need this as well. And I'm not going to write everything from scratch here, because we don't have that much time left. But the first thing here is, I have a function, meanMinuteReturns, which is calculating, and I need to open my R provider. So this is calculating average returns over one minute. It's using the Deedle functions for doing things like sampling.
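The local experiments in this section look roughly like the sketch below. The argument order of Series.sampleTimeInto and the exact shape of the R interop are from memory, so treat them as assumptions rather than the demo's actual code.

```fsharp
open System
open Deedle
open RProvider
open RProvider.quantmod

let prices : Series<DateTimeOffset, float> = wdc?Ask

// Moving average over a window of observations
let smooth = prices |> Stats.movingMean 100

// Resample into regular one-minute chunks, averaging each chunk
let byMinute =
    prices
    |> Series.sampleTimeInto (TimeSpan.FromMinutes 1.0) Direction.Forward Stats.mean

// Call quantmod's Delt on the sampled values, then pull the R result
// back into the F# world as a plain array of floats
let returns = R.Delt(Series.values byMinute).GetValue<float[]>()
```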
The function then calls R to do the actual calculation. And I can run this using the cloud blocks that I was showing before, over three different months. So here I'm looking at the first quarter of 2015. And when I run this, it will actually take all the code I wrote here, send it to the cloud, and start it there. And now I have this Q1, which is my handle for the process that's running in the cloud. I can see it's running. And if I look at my cluster, it created all the work items, and it's running there. And when it eventually finishes, I'll show you a chart. But before that happens, a few important points here. One thing that's really nice in this demo is that I have the data stored in an Azure data center somewhere, and I created the MBrace compute cluster so that it's in the same data center. So rather than downloading the data and doing some calculations locally, I'm actually shipping the code to the same place, where it can run much faster, because the data access isn't downloading anything. It's just looking at a machine across the corridor or something to get the data. It's still running, because we're actually processing a fairly large chunk of the data set. But I think you can see it's getting closer to the end. So let's leave it running for a bit. The other interesting thing I was doing here was using the R type provider. And I actually didn't show you any other type providers. This is probably my first talk where I haven't done that. But lots of people here were showing other type providers, for things like JSON. I was only doing the R type provider, which is really using the very general F# mechanism to import all the R functions. And I think the general theme here is that with F#, it's really easy to integrate with lots of other environments. So if you're pulling data from somewhere, from JSON or CSV, you get nice typed access to it. If you're calling R, you get nice typed access to it as well. And let's see what my cluster is doing. It's still running. The other interesting thing that I had to do here, which I didn't show you because it's boring, is that when I created my cluster, I actually ran another cloud computation there that installs R on all the machines in the cluster, and then it installs all the dependencies. So you can actually do lots of weird things to these clusters, including running arbitrary code that installs stuff there. This is the slide, and I'll tweet it, with references to all the things, all the libraries, all the websites, where you can learn more about what I was doing here. So fsharp.org is the best place to get started. Ionide is the F# plugin for VS Code and Atom that has this nice integration with HTML formatters. FsLab is the data science sort of components, all in one place. And MBrace is the scalable computing project. And I was using the Deedle library for doing the exploration, the R type provider for calling R and accessing all the different statistical functions, and XPlot as the library for the charting. So let's see if this has finished. No, it's still doing some stuff. I'll leave it running, and you can probably find me later if you want to see the results. It needs a few more seconds to complete. If you want to remember just one URL, it's fsharp.org, where you can find links to all the other things. And I'll be around if you want to chat more or see the pretty chart. You can come to the FP Lab at 1:40 in room 10. It's like by the...
If you enter the building, just go straight, and then it's somewhere up there, hidden. This is where all the functional people go at 1:40 to chat. In this room, there are going to be more FP sessions. I think we've done all the F# ones, but there are lots of exciting Elixir talks. And I'm part of fsharpWorks, which is doing consulting and F# training. So if you want to learn more, or if you want to help with implementing all the weird interfaces that I carefully designed so that only I can implement them, you can talk to us. Thank you. We probably don't have that much time for questions, but you can just find me here and chat later.
Working with small time-series data is fun. You can easily load daily Microsoft stock prices into memory and find the most successful year in its history. Or you can download average daily temperatures in your city over the last 10 years and try to spot a trend in a chart. But what if you have prices at millisecond frequency for thousands of stocks or high-resolution temperatures for the entire globe? With the right tools, working with massive time-series can feel the same as crunching through hundreds of observations in memory. In this talk, I will show what's available if you are using R, the .NET platform and Azure. We'll use Deedle, a scalable .NET data analytics library, R type provider that makes thousands of R packages available to .NET developers and MBrace, a cloud computing framework that can easily scale your data analytics over an Azure compute cluster.
10.5446/51705 (DOI)
We are billed. We are lued. Come on. Okay, let's do this in English. I promise. And we all pretend we understand each other in Scandinavia, but sometimes things go wrong. So let's do this in English. My name is Mads Torgersen, and I'm from Denmark. I know that this is the first time slot after the party, so I'll try to shout and make things really uncomfortable for you. Actually, if you could experience the bright light that's coming into my eyes, you would wish you had gone home early last night. And it probably helps a little that you had to pay for the beer, but still. So I work for Microsoft. It says there on the slide, so it must be true. I'm on the C# team. I help evolve the language, right? So what keeps me off the street, essentially, is to always come up with more stuff to put into that poor little language. And that's what we're going to talk about today. So I can't really see you because of the bright spots, but I'm going to try this anyway. How many people here are .NET developers? Okay, if you, how many, can we try the other way? How many are not? A very tiny few, and all the people who are asleep, maybe. So, okay. All right. Then I'm not going to try to convince you that .NET is a great thing, because you already believe that, apart from the two of you. And you're expendable. So, given that you're here and not at the HoloLens presentation, I will assume that you have a little bit of a hangover and didn't want the 3D kind of getting you sick. I'm going to try to show some data here on the assumption that you're too sleepy to see through it. C# is actually doing pretty well. So Stack Overflow do a developer survey every year of Stack Overflow people. So it's always skewed to, like, who's already on Stack Overflow. And different languages and technologies have varying degrees of that. But, you know, given that, C# is doing pretty well. Like, it's one of the most used languages there. Not the most, but on the other hand, one of the ones above isn't a language, really. I'm not talking about JavaScript here. It's not a programming language, at least: SQL, right? So, doing pretty well there in terms of the numbers of users. Despite certain limitations in where C# is usually used. So we're pretty happy about that. And what kind of really warms my heart is this next one, which is the most loved technologies. You know, they ask people, you know, given that you're already using this technology, would you like to continue using it? And we're on there. We're not near the top. But we're on there. So we're both, like, a very used and a pretty well loved language as well. We're kind of proud of that. There's only, like, these three down here that are actually on both lists, right? Either a lot of people use you or a lot of people love you. It's kind of weird that way. But we're kind of doing both. So we must be doing something right here. And we keep trying to think about what is it that we're doing right, and how can we prevent ourselves from stopping doing it, so to speak. That was a lot of negatives. Let's just give a shout out to F#. It's high on the list as well there. So what is it that we're doing right? Well, we think some of it is we've sort of been lucky striking the right balance in evolving the language. C# is not a young language anymore. It started in the Java ages. Of course, it was a completely different language than Java from the outset.
Like, any similarity was accidental. And so how can you keep a language like that relevant? How can you keep it going? What's the best way to keep it going, given that it has a lot of uses already? And so we kind of have this sort of balancing act there that we try to keep our balance on. You know, on the one hand, we want to keep improving it. Like, the world of programming languages is evolving. There's lots of fancy stuff that people can do in languages now that they couldn't when we came out. So we want to keep improving and be a good language as well. But at the same time, we can't take out what's there, because then we're going to break a bunch of people who are already using it. And we want to also stay simple. So we kind of have to be really careful about what we choose to add, and make sure that when we add something to the language, it's really worth it. Every time we do, it gets bigger. We get further away from simplicity, and we can never get closer to simplicity again. Also, there's sort of a balancing act between people already, like most of you, who are already invested. We want to improve your experience, and therefore we kind of have to take your scenarios, the places where you're using C#, we kind of have to take those to heart and improve incrementally in those areas. But at the same time, we want to be a language, we're ambitious. We want to be a language that people choose and pick up. We don't want to just, you know, hold on to our current community and go from there. We also want to invite new people in. So we have to also be attractive to new users, and how do we do that? And then, a lot of the evolution in programming languages in general involves different paradigms from where we started, right? So there's a lot of functional programming going on, and we want to embrace that thinking, and you'll see some of that in the talk. And at the same time, we have to stay true to the spirit of the language. We have to make sure that it still feels like C#. So there's a whole lot of taking these ideas but working on them until they fit, like, kind of making them fit well into C#. That's sort of our job there. So I hope that kind of gives you a glimpse into the forces that affect my everyday life, not that you care about my everyday life, but now there it is. So a lot of things have changed in .NET recently. Probably most of you have noticed some of it. And I think it's really exciting. I think we're really expanding our opportunity in many ways. Of course, a big one is that it used to be that .NET was primarily, at least, a Windows technology, and that is really changing, right? We've done several things. We've embraced Xamarin, which is all about targeting all the big mobile platforms with C#, and with a lot of source code sharing, like, with the vast majority of your source code being shared across. So that's a value proposition for any language. They just so happen to have done it in C#, and that's really, really super lucky for us. So now, you know, C# on every mobile device, even Windows Phone. Oh, it was there all the time. And also, we started this big project called .NET Core, which is about getting really good .NET, like, modern .NET, on Mac and, most importantly, on Linux, right? So you can do your server jobs in C#, run them really efficiently on other platforms. Other sorts of architectural changes come along with that. Like, .NET today, traditional .NET, is sort of a system component.
So there's all this: we have to worry about which one is installed in your operating system, and so on. And with .NET Core, you can just deploy the .NET runtime with the app, and it always works, right? We're making headway in compiling actually C# to native code, or compiling IL to native code, and so there's some performance improvements there in certain scenarios. If you're willing to take the trade-off of compiling for a specific machine architecture, and you have a good deployment story for that, such as an app store, then that technology, I think, will improve over time and become an option for many of you. Sort of more on the developer's side, if you will: it used to be that the compilers, the stuff that understood C#, was just this black box, right? You just run the compiler and it either gives you some errors or, if you're really lucky, it gives you a runnable program. And we've completely changed the story there. What many of you have heard of is Project Roslyn. I'm going to touch a little more on that. So now the logic that understands C# is an open API that's full fidelity and that everybody can rely on. And actually, I do want to call out, at this conference, many of you maybe were at talks that talked about .NET Core and Xamarin in the previous days. Today there's a talk about one third-party framework that's built on top of this Project Roslyn, on top of this open API. So really opening up the ecosystem for language tooling, if you will. There's also one on the show floor down there, OzCode, which is an enhanced debugging experience based on Roslyn. So it's really kind of starting to take root as a way, a booster, if you will, for language-based tools to build on and have a much improved experience. It used to be that C# and Visual Studio were joined at the hip. We still kind of are a little bit, but the advent of Roslyn actually means that we've decoupled the sort of editor-level understanding of C# from Visual Studio in a major way, so that we are now seeing C# supported in many, many editors, not just the Microsoft ones. There's a project called OmniSharp, which is specifically taking the Roslyn language engine and targeting it to a bunch of different editors, from Emacs through Atom and Sublime and all over the place. So we're kind of breaking out of that particular Microsoft tie-in as well. And of course, the big one: we're now open source. Okay, .NET is open source. We get contributions; like, people fix bugs and add features to C# and to the .NET stack. And it's really, really awesome. We had a whole, like, port of .NET that was done as a third-party contribution. So it's much easier for people to see what we're doing, it's much easier for us to get feedback, to get improvements incrementally. So we really, really love being open source. I don't know why we were so scared of it before, but we got over it. Okay. So these last three are sort of the ones that are closest to what I'm working on, and the ones that are the background for this talk to come. But just keep those in mind, as the .NET landscape is really changing. Right? So speaking a little more about that Roslyn project, the idea here is that there really should only need to be one code base in the whole world that understands C#.
Offer it up as an API, and it will be efficient in all your scenarios, and so on. That is really, really hard to achieve, but we kind of did. It just took us a very long time. So we started talking about this probably seven years ago, and it's out now. It came out last summer, and it kind of does what we wanted it to. So we're pretty happy. Like, as I said, it underlies all the IDEs and editors. You want an Eclipse plug-in, you know, fine. It will support that. Any kind of analysis tool, linter, code quality thing that runs sort of offline, you can do that. If you want to do incremental stuff in the editor, you can add your own. We even have a framework for adding what we call analyzers and fixers, and you can build your own refactorings really, really easily. I'm not going to show it today, but it's a beautiful, full-fidelity language model in the API, where you can just talk about the code as an object model, and it's really easy to build your own little tooling and plug it into Visual Studio and other things that are starting to understand these analyzers and fixers. So in a sense, it's kind of actually taking the language experience to a new level, and I think we're sort of leading the way a little bit for other languages in creating an open ecosystem around the tooling for the language that everybody can plug into. I think we can, with a little bit of luck and with your help, you know, building some tooling around the language and sharing it out, we're actually on track for being one of the most magnificently tooled languages in the world. I'm boasting a little bit here. I shouldn't do that. I'm in Scandinavia. Source generation, you know, you want to build your own source code, beautifully factored, do it with Roslyn. Scripting, REPLs; I'm going to show the REPL, the read-eval-print loop, in a bit. Oh, and it does compile too, right? So it actually will produce IL if you really want to. Okay. So really quickly, another thing that's sort of happening is that we're becoming more agile, becoming more decoupled from the Visual Studio cycle. And so I think we're moving to evolving the language in a sort of more incremental, maybe faster fashion. Okay. So C# 7 is already imminent. We just released C# 6 last year. That's increasing the cadence quite a bit over what we've been doing in previous years. C# 8 probably won't be that far after. And we've been talking about maybe having point releases in between, where whenever we're finished with a feature, why not ship it, yank up the dot number of the release, and let people use it if they want to, right? So it's something you can opt into. Well, we agreed across my organization that we will use C# 7.3, and you can take that, Chirin, if you want to. And I'm going to show you stuff from C# 6, C# 7, and C# 8 today. So why don't I go ahead and do that? Stop talking and show us some code, right? Who's asleep? No one anymore. Okay. Let's see. What am I doing? Here we go. Okay. So let's start out. How many people have used the REPL yet? The C# Interactive window? Ah, not enough. How many people are using Visual Studio 2015? Oh, my God. You're cheating yourself. Do you have Update 1? Or better? Yeah. Yeah, most. Come on, guys. Look at this. View, Other Windows. Okay. But it's there. You should go into Other Windows sometimes. It's not that stuff is less important. It's that it was added later and there wasn't room in the big one. Okay. Look at here. C# Interactive.
There's also a Visual Basic Interactive. That actually kind of surprised me. I hadn't seen that before. Okay. But let's do this one. It says C# in the title of the talk. Okay. This is an interactive window. It's like, do you know REPLs? Okay. You know, you type some code, it gets executed for you. I type 3. Let's do something advanced. 2 plus 2. Hit enter and, you know, compile. Think about it. It's warming up a little. But, hey, it calculates it for me. This is a calculator. Great. But it's also running C# interactively. So I can define a variable. Let's define a variable like this, bokmål. And now I can say bokmål. Yeah. And I say bokmål. And you say bokmål. Only the Norwegians get this. So let's write some code. We can show some of the C# 6 features here. The REPL actually already comes with the using static. How many people? So a lot of people have seen C# 6. Maybe a lot of you haven't yet. It looked like about half and half. I'm just going to show a few features from there. C# 6 had a lot of, like, features to reduce boilerplate. So I'm going to show three or four of them. So using static is nice. It's where you specify a type instead, like System.Console. And then all the static members of that type are put into scope. This is actually a catch-up feature for us. Java has had this for, like, 10 years. And we're like, okay, let's do it. And it's actually great. That means I can just say WriteLine, WriteLine, right there. WriteLine is a static member of that type. It's now in scope. Great. Okay. The other thing, I mean, you use it for things like System.Math. I don't want to say Math.Sin, Math.Sqrt, blah, blah, blah, blah, blah, blah. Just say it, you know; other languages have done this for years. Now we can do it. So let's WriteLine an interpolated string. How many people have seen interpolated strings? Okay, about half. Also a thing that other languages are starting to have. Ours are better. Interpolated string is a weird name that sort of just became the name; like, many things have weird names, like lambda. We just go with the flow here. So we can say, hey, and then put a hole in our interpolated string and put bokmål in there. And when we execute that, it's going to take the value, you know, this is just an expression in here. You know, in many languages, you can only put a variable or some such. You can put any old expression in C# here. I don't know what's up with their parsers. Like, we can put whatever big expression in here, and it gets evaluated, and the result gets substituted in. It's nothing, nothing deep. Okay. Let's take that and put it inside of a method. So let's define a little method. Let's call it void Greet. And then this has a very simple method body consisting of just an expression. So let's use expression-bodied members, new in C# 6 as well. If your method body or property getter body was just a return and then some expression, then get rid of the boilerplate and just put this function arrow here, known from lambdas, popular from TV, and just stick that there, and you have defined a method, right? So when I say Greet, it says that. Okay. No surprise. Let's do, let's do one more thing. Oh yeah, nameof, strange little feature that's surprisingly useful. So nameof, all it does is to take the name of something and give it back as a string. Okay. So that's pretty lame.
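Typed into the C# Interactive window, the session so far looks roughly like this sketch; the bokmål variable is the Norwegian joke from the demo, reconstructed from context.

```csharp
using static System.Console;   // all static members of Console in scope
using static System.Math;      // same trick for Math: Sqrt, Sin, ...

var bokmål = "nynorsk";
WriteLine(2 + 2);                      // 4
WriteLine(Sqrt(2.0));                  // no Math. prefix needed
WriteLine($"hey, {bokmål}");           // interpolated string: any expression fits in the hole

void Greet() => WriteLine($"hey, {bokmål}");   // expression-bodied method
Greet();

WriteLine(nameof(bokmål));             // just hands back "bokmål" as a string
```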
Except, the good thing about it is that it only works if there is a something of that name. Why is that a good thing? Well, you know, when you're printing out, you know, throwing an exception and putting the argument name in there, or when you're using reflection or something like that and you actually need the name of a thing, and then you make a refactoring and the name of that thing changes, but your little string literal didn't, and you have a subtle bug in your code. Well, for all those cases, now use nameof; it's a longer string literal and it gives more errors. That's great. You want this. Okay. So now I can say nameof here instead, and it's no longer going to be nynorsk, it's going to be bokmål, because that was the name of the variable. Right? I can only do this demo in Norway. You should really appreciate this thing. So is that it? No, let's do one more. Let's see what time we have. We can do this. Let's say that, what do we do? Yeah, let's go back up and declare the method like this again. So let's say Norway is null. Don't try this at home. Let's say it goes away, right? Oh, never, it probably says in your national anthem somewhere, never shall Norway perish, or something like that. Just for a second, let's just say it's hidden in fog. So I might want to write something that takes that into account. Let's say that I have code somewhere that takes the length of Norway. That's about, you know, what is it, 2,000 kilometers or something crazy like that? So it's actually an error right now, because Norway is hidden in fog. If I want to be on the safe side and say, give me the length of Norway if it isn't hidden by fog, I can use the null conditional operator, the question mark here, which will let you, well, still goes wrong. Oh, I'm getting the wrong error here. Let's WriteLine that thing. Let's go here. Let's WriteLine the thing. We can do it easier; let's just remove the semicolon here. So I'm getting the value of that. It's not a null reference exception. The null conditional operator, sorry, I botched it a little bit. The null conditional operator will look at the left-hand side of a dot or an indexing or something, and if that's null, it just won't do the rest; it'll just return null instead, right? So it prevents you from getting a null reference exception, which is the error I should have gotten before when I did this thing here. You see, still get that. Come on. There it is. There we are. Object reference not set to an instance of an object. Okay. So the null conditional operator kind of checks for null for you. So when you have all that code flow where you say, if that thing is not null, and the other thing is not null, and so on, you can kind of just compact all that back to saying first thing, question dot, second thing, question index, third thing, and then either that whole thing will be null if there was a null along the way, or you actually get the value out, and you can just check for null at the end. Okay. So that's C# 6, just a selection of features, and you also saw the REPL in action here. So whenever you want to try a new API or play with some idea before you put it into a project and make real code out of it, you can get a really good kind of immediate-response feel from using the REPL.
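The null-conditional part of the demo, condensed; norway is just a local variable here, and the commented chain at the end is a hypothetical illustration rather than code from the talk.

```csharp
using static System.Console;

string norway = null;               // temporarily hidden in fog
// WriteLine(norway.Length);        // NullReferenceException: object reference not set...
WriteLine(norway?.Length);          // prints an empty line instead of throwing
WriteLine(nameof(norway));          // "norway": renames keep this string in sync

// Chains collapse the old if-not-null cascades into one expression:
// var count = first?.Second?[2]?.Third;
```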
So you really should, even though this is not a tooling talk, use the REPL. Okay. Okay. Enough of that. So that was C# 6. Yeah, get on with it. C# 7. Okay. So I have a prototype here. Actually, you can get a pretty up-to-date prototype by downloading Preview 2 of Visual Studio '15'. There's some black magic that you need to do in order to enable the new language features. There's a blog out there; you can search on the .NET blog and get the tricks. We're looking into getting better tooling; we're actually pretty far along in setting that up. You can soon install a VSIX, a Visual Studio extension, in Visual Studio that will help you download the latest. And then, when you click Send a Smile or whatever, we automatically get your feedback on the new language features. And so, the more of you play with it and give us feedback saying, oh, this works in a stupid way or whatever, actually, we get really happy, and you get the language you want, because we listen to you. Okay. At least mostly. So, C# 7. There's a bunch of features. Let me start with a small one just to get us warmed up here. So, one odd thing about numbers; let's actually say int array, numbers equals. So we can put some numbers in there. So, just numbers in order, right? If I was teaching a kid, or in other ways wanted to very explicitly see the bits and understand what I was doing, I would want to use binary literals. So we actually added binary literals to C#. You can say 0b and you get your binary literals. Okay. Probably not super useful in most cases, but hey, sometimes you want to see that bit pattern. You put hex, and you kind of have to remember, E is, you know, what is E? It's 14. But what is it in a bit pattern? Okay. So, also, you know, these guys can get pretty long. So there's also now digit separators in there. That's the underscore there. So you can put it in wherever you want, if you, you know, want to kind of group your binaries by threes or your hex by twos or bits, you know, whatever. You can do it. You can put as many in as you want, actually. We don't judge. But, you know, just useful syntax stuff there. So that's quickly over with; I'm just showing it to you there because I needed a list of numbers anyway for my next trick. So actually, what I want to do is I want to take these guys and I want to tally them. Okay. So I want to tally my numbers. And what do I mean by that? Well, I want to go through that list, and I want to add them up, and I also want to count them. Okay. I want to add them and I want to count them. And that's what my tally method does when I've written it, because it's not there yet. So let's generate it. Let's not have the private there. So let's call these values. What? So I'm going to end up with both a sum and a count here. Shoot. What should I return? Should I return the sum? Or should I return the count? What do you think? Both. Yes. Right. That's the right answer. Both. I'm just going to return an int and an int. Right? Yeah. Cool. See? So I haven't actually done it yet. Let me do it. Return. Return an int and an int. This only works for zero, but it's still pretty good. Cool. So this is a tuple type. All right. It's a tuple type. That's a new thing, language-supported in C#. It means an int and an int here in this case. You could put other types there too. Trust me. And they can be longer.
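So far, the C# 7 part of the session looks roughly like this sketch:

```csharp
// Binary literals and digit separators
var numbers = new[] { 0b0000_0001, 0b0000_0010, 0b0000_0100, 0b0000_1000 };
var big = 0xFF_FF + 1_000_000;      // separators work in hex and decimal too

// A method with a tuple return type; only right for empty input, but it compiles
(int, int) Tally(int[] values) => (0, 0);

// Tuple types can also be longer, e.g. (int, int, string)
```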
Actually, in this particular version, they can be seven long, because there are underlying types, you know, and they only go up to seven. But there's actually an eighth one that has an extra field in it called Rest. And when the compiler gets a little smarter, when we finish this feature, it will just recursively put more tuples inside of the other ones to emulate longer ones. If you have a 19-tuple, it will be transparent at the language level, and the underlying types will just be nested. So, tuples of arbitrary size; but if you have a 19-tuple in a return, you're probably in a bad place anyway. Okay. Let's see. So what happens? What can I do with my tuple? So now you see I get an (int, int) back here. Great. Okay. What can I do with it? Let's use a WriteLine and write something out, an interpolated string. You know what that is now. So, let's say that the first one is the sum. So we put that in there. Say t dot, what's there? Well, there's Item1 and Item2. Okay. I can probably guess what those are. Item1 is probably, let's go out on a limb here and say it's the first one. So we've got Item2 here as well, and this is a way to get at these guys. They are names, but they're not really nice names. So let's give them better names, actually. Let's put some names in our tuple type. Okay. This one's the sum. This one's the count. Now I don't have to worry about that, and read the comments or something outrageous like that. It actually says right there. This is still allowed; the underlying names are still there, but now if I dot, you see that the good names are also there. So I can dot in for the sum and I can dot in for the count. It means the same thing as Item1 and Item2. Okay. So sum and count, and all of a sudden, you know, it gets kind of readable. So that's cool. I know some people have used tuples before, and they'll be like, you know, especially if you tinkered with a functional language and you got kind of arrogant like those guys, you'd be like, yeah, but what about deconstruction? And this is my answer. We'll get there, but we're not quite there yet. Yeah. Take that. So it's going to be possible later, when we actually ship this, to say something like (var s, var c) here; you're declaring a couple of variables, and the tuple that comes back immediately gets split back out into multiple variables, and you can just use those s and c variables in the following statements here. So that's also in the works. But we think both kinds of access are useful and important. So we're going to allow both. Okay. So far, so good. Let's actually go and implement this thing for real, as sketched below. So let's take this tuple and return the result instead, and then declare the result to be this tuple. That's a good start. And now we can accumulate into this tuple variable. So let's say, for each of those numbers there, foreach var v in values, do something. And the obvious and probably the most beautiful thing to do is to say r equals a new tuple created from the old one. So actually, this tuple doesn't have names yet. Let's give it some better names. So you can also do that in tuple literals. It looks like when you have named parameters. In general, you kind of notice, I'm just going on a tangent here.
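The finished method, as written in the demo, with element names on the return type and a named tuple literal as the accumulator:

```csharp
(int sum, int count) Tally(int[] values)
{
    var r = (s: 0, c: 0);               // tuple literal with names
    foreach (var v in values)
        r = (r.s + v, r.c + 1);         // new tuple built from the old one
    return r;                           // names don't have to match; only types do
}

var t = Tally(numbers);
WriteLine($"Sum: {t.sum}, Count: {t.count}");   // or t.Item1 / t.Item2
// Planned deconstruction syntax: (var s, var c) = Tally(numbers);
```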
Notice, this looks kind of like a parameter list. All right. So we're really trying to get a symmetry in the syntax here. So it just feels like a parameter list going the other way. When you look at the method, you can see the list of what goes out and the list of what comes in. Okay. And similarly for the literals, they kind of look like argument lists. You heard the difference between parameters and arguments at the party last night, right? So arguments, they're the things coming in, and the parameters are where they come in. Let's get that straight. Okay. But anyway, so now I gave these names, and so that means that r now has an s and a c. Actually, also notice, I keep interrupting myself, notice that I'm returning an r that has different names than the return type has. And we totally don't care about that. We don't want to be rigid about, oh, the names have to match, or something like that. Like, the names are just there to help you. They're not there to create artificial barriers; tuples are just defined by the types and the positions. And the names are just a help. Okay. So I can say r.s here plus the value that came in, and r.c, the count, plus one. And then I just updated my tuple variable here with a new value. So is this going to lead to a bunch of little micro-allocations that's going to make my app here less performant? No, because tuples are actually structs. Okay. So they're not classes. Objects don't get allocated when you create a tuple. They're just structs. So when I return a tuple, I don't have to worry about, oh, I'm allocating an object; should I return a tuple, or is that too expensive, should I use out parameters? It's not expensive. It's just a struct on the stack. It's no more expensive than passing parameters. Okay. Just values, the values are right there on the stack. So don't worry about that. Another nice thing about it being a value type is that they are values. They get copied whenever they get passed. So you don't get any shared state through tuples, which means we can make them mutable without worrying about concurrency. So tuples are actually mutable. And we get a lot of crap from the functional crowd over this, but tuples are mutable in C#, because why not? There's no danger in this. Okay. Trust me. So instead of doing it like this, I could actually go and say r.s plus-equals the value, like that. It's fine. I can even say r.c plus plus, which is not a reference to C++. So I'm incrementing the individual fields; they're actual fields. I could even take a ref to them, actually. They're fields. Just public fields, public mutable fields. So retro. But that's actually just, you know, there's no magic, right? It's not some encapsulated data structure. It's just your stuff, right there, open and accessible, and you can do whatever you want with it. Okay. Enough of that. Let's go back to the more beautiful implementation, though, since this kind of hurts my eyes a little bit, to be honest. Okay. And go. I should have deleted this in one go. Then I wouldn't have so much undoing to do. There we go. Okay. So that's tuples. One more thing, oh, one more thing to say about them. This really is a type, right? So it can be used as a type argument and wherever you use a type.
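The mutating variant he typed and then undid, for reference: tuples are structs with public, mutable fields.

```csharp
var r = (s: 0, c: 0);
r.s += 42;    // assign straight into one field
r.c++;        // not a reference to C++
// ...and since a tuple is a real type, it can go anywhere a type can
```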
For instance, async in C#, which you all know, right? I can return a Task of a tuple. Returning multiple values from async methods is even more bothersome than usual, because you can't have out parameters there. But now it isn't bothersome at all: I return a Task of this tuple type, and when I consume it, I await the task and get a tuple out. Simple as that. (The red squiggle is just because this isn't declared as an async method; it does actually work.) So that's useful. Another thing to know about tuples is that they have value semantics: hash codes and equality between tuples are value-based, based on what's inside, which is typically how structs behave. That means you can use a tuple as a key in a dictionary. If you want to key by name and age, you just instantiate the generic dictionary type with a tuple type as the key. Or if you want multiple things to come out when you look something up, use a tuple as the dictionary's value type. They just work right for all those things, and I think data structures in general are another good scenario for tuples. Good. Are you going to use these? Cool. It's so great getting applause for things that were in other languages 30 years ago and we only finally got to. But I'll take it. Okay, another thing: pattern matching. Let's change the example a little. It's slightly contrived, but bear with me; small examples are easier on stage. Say we want our tally method to take recursive lists: arrays that can contain either integers or other arrays, which can in turn contain integers or other arrays, and so on. I'm going to take a totally ugly shortcut and just make it an object array, so it can contain both. It's still fine to put all integers in there, but we can also put a nested array in there, saying new object[] with some of these values. And just to make it more interesting, let's put a null in there too, since it's an object array now. Of course, tally doesn't work anymore, because it takes an int array, so we change it to take an object array. But when we do that, this bit breaks, because v, as we foreach over the objects, is no longer known to be an int. So how can we implement this method to do the right thing? What we'd do today is something along the lines of: check if v is int, then do that thing down there. But we can't just do it, because even though we just checked that v is an int, the compiler doesn't know that down here, so we kind of have to say it again: oh, and if it's still an int, do the addition. We can do better than that, and we are. Think of this as just a small trick for now: if I put a variable name right there in the is expression, I'm introducing a new variable i, which is v as the int we just checked it was. If the is expression is true, i gets assigned the value of v as an int, it's now known to be an int, and I can just use it down here. Okay.
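In miniature, the is-pattern just described, plus the tuple-keyed dictionary from a moment ago; the names and values here are made up for illustration:

```csharp
using System;
using System.Collections.Generic;

class Demo
{
    static void Main()
    {
        // Tuples have value equality, so they work as dictionary keys:
        var lookup = new Dictionary<(string first, string last), int>
        {
            [("Ada", "Lovelace")] = 1815,
        };
        Console.WriteLine(lookup[("Ada", "Lovelace")]); // 1815

        // The is-pattern: check the type and introduce a variable in one step.
        object v = 42;
        if (v is int i)
        {
            Console.WriteLine(i + 1); // i is an int here; no second cast needed
        }
    }
}
```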
Kind of simple but useful. You all have code that does this, right? Except the two of you who don't use C#; you have similar code in Java, and now you wish you had this. So that's kind of cool, but what this really is, is more than an expansion of the is expression. We're edging into something new in C#. This int i is really a new kind of thing in the language, what other languages typically call a pattern. A pattern is something declarative that checks something about a value, whether it's true or false, and also extracts some information from that value if the check is true. It can do either or both. This is a clean little example of that: it checks that the value is an int, and it extracts the int value into a new variable. That's what patterns do, and there can and will be other patterns in C#, just probably not in C# 7. But this is the beginning of patterns, and you can use them in different places. Say we now also want to deal with the case where v is not an int but an object array. We could keep going with else if, else if, saying 'v is' over and over, asking questions of v all the time; there should be a better way than repeating ourselves so much. So another place we're integrating patterns into the existing language, and this is that thing about keeping the spirit of C# while adding new things, is the switch statement. We can switch on v. Up until now it hasn't been legal in C# to switch on just anything, only on primitive types and strings. Now you can switch on anything you like; you can see it's not an error to switch on v here. And then I can have cases. I can still say case 5, but I can also put a pattern there: case int i. (IntelliSense doesn't quite work with these new features yet.) And why am I getting a red squiggle? Not because this isn't implemented; I just forgot to put a break. There we are. Let's indent this whole thing. So I can put patterns in cases, which means I now have type switching: switch on this thing, and in the case where it's an int, call it i and do this thing here. Now let's do the same for the object array; call it l, for list. We want to recurse: get the nested result, call it var n, by calling tally on that list, and then add its numbers into ours. So r.s becomes r.s plus the sum that just came back; n is the tuple returned by the recursive tally call, so it has a sum in there, and we add n's count to r.c the same way. You get the idea. So we've fixed that case as well, and we also need a break here. We haven't removed that requirement. Sorry. We do talk about it: do you think we should take the break away, or keep it? Yeah? A lot of people say we should. Who thinks we shouldn't? She thinks we shouldn't. Okay, a few. There's a subtlety behind that question; let me add another case in a second to show it.
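To park that for a moment, here's roughly the switch-based tally as built so far; the empty-array guard and the null case get added just after this:

```csharp
using System;

static class Demo
{
    // Sketch of the pattern-based switch built up in the demo.
    static (int sum, int count) Tally(object[] values)
    {
        var r = (s: 0, c: 0);
        foreach (var v in values)
        {
            switch (v)
            {
                case int i:            // type pattern: it's an int, call it i
                    r.s += i;
                    r.c++;
                    break;
                case object[] l:       // it's a nested list, call it l
                    var n = Tally(l);  // recurse; n is a (sum, count) tuple
                    r.s += n.sum;
                    r.c += n.count;
                    break;
            }
        }
        return r;
    }

    static void Main() =>
        Console.WriteLine(Tally(new object[] { 1, 2, new object[] { 3, 4 }, null }));
}
```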
Here's the subtlety: say I add another case, case 5. Now it's fall-through. Take it away, now it isn't. So it gets subtle if we drop the required break. But anyway, that's a side track. So I can put patterns in there. Now, just for the sake of the example, and it's a little silly: let's say we don't want to do the recursive call at all if the array is empty. So I put another case there, and now it complains that I have two of the same. But I can put an extra condition on it: when l.Length is zero, do nothing, just break, because I don't want to recurse into an empty array. So you can also put conditions on your case statements now, with the when clause. What's the question? Does the order matter? Yes. The thing about the switch statement up until now is that it had no concept of evaluation order: when constant cases and default are the only things you have, they're all independent. Now we have to introduce an evaluation order, so cases are evaluated from top to bottom, except that default, wherever it occurs, is always evaluated last. There's a top-to-bottom evaluation order in switches that wasn't there before and didn't need to be. So in this case it reaches the empty-array case first, and therefore, when we get to the general one, we know at least that l.Length is not zero. It's one of those things; it's quite similar to catch clauses, which also do a type check in order, and since C# 6 (I didn't show this) you can put conditions on those too: you can say 'when' and some condition, and it catches only when that condition is true. It's very similar; we try to keep things consistent like that. What else? Oh yes, there was that null up there. You still have the constant cases, as you saw, so I can say case null. And I could also put a default that throws an exception. It's still the good old switch statement, just extended with more permissive types and more things you can do in the case clauses. So that's the beginning of pattern matching; really it's type matching that's coming into C# 7. Can you see yourselves using that? Yeah? Cool. Then we're doing the right thing. All right, I think I want to show one more thing in the demo, and then we'll go back to slides. Let's remove this curly brace here. What I just did was take the tally method and put it inside the preceding method, and that's giving me a surprisingly small amount of red. Now let's delete the static. What I have now is a local method: we're now allowing you to declare methods inside other methods. The other red down here is just a missing curly brace somewhere; there we go. And then there's a little red up here saying that because tally is a local method, it can't be used before it's declared, just like local variables. It's complaining that I'm using it before declaring it, so I have to swap the order around: take these couple of lines and move them down to the end, if I can find the end. Boop, boop, boop. Probably there, right? I hope so. Yes; now all the red goes away.
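Roughly, the shape the demo ends up with: tally as a local function, declared before its use as this prototype requires.

```csharp
using System;

class Program
{
    static void Main()
    {
        // Local function: visible only inside Main and, at least in this
        // prototype, only usable after the point where it's declared.
        (int sum, int count) Tally(int[] values)
        {
            var r = (s: 0, c: 0);
            foreach (var v in values) { r.s += v; r.c++; }
            return r;
        }

        var t = Tally(new[] { 1, 2, 3 });
        Console.WriteLine($"Sum: {t.sum}, Count: {t.count}");
    }
}
```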
So I've just declared it as a helper method inside another method. That's useful sometimes. You know the feeling of writing a helper method and making it private, but really not wanting the other methods to take a dependency on it. Or sometimes I'm building an iterator or an async method and I don't want the outer method itself to be the iterator or the async part: with an iterator, I want to check my arguments eagerly, so I write a public wrapper method that calls the iterator method. Now you can put the iterator method inside the wrapper. And they share scope. I carefully avoided reusing any names here, but the parameter numbers from up here is in scope down in the local method. If I used numbers down here instead, it would do the wrong thing when I call recursively, but you can see it's totally in scope and totally allowed. It's just like lambdas: local functions capture enclosing variables and so on. Some languages, JavaScript today and traditionally Pascal and others, have used nesting procedures inside procedures as a primary structuring mechanism for code. Some people write their JavaScript that way, as the main scoping and composition mechanism, and we're getting to where, if that's how you roll, you can do it in C#. That may not be the most common case, but the helper-method scenario alone is useful. And it's also more efficient than lambdas: there's no allocation involved. So that's it for demos. Let's go back to slides and talk a little more about the future. There are a few more things in C# 7 that I'm not going to get to. As I said, we're upping the cadence of the language, so we're trying to get comfortable with not starting every version from a blank slate: sitting down, deciding what to do this time, taking a three-year cycle to design, implement, and ship, and then going blank slate again. We're trying to work on multiple trains at a time: if something is harder to figure out, we start working on it without expecting it to make the following release. We have to get better at that. Like everyone else, we've finally woken up to the reality that you can have multiple trains running at once, and a feature just catches whichever train is leaving the station when it's ready. Okay. So I mentioned there will be more kinds of patterns, and I don't think they will make it into C# 7, but here's a projection; let's get the full size of it up. This is what's there now: you can check that an object is a Point, call it p, and p is now in scope and known to be a Point, so you can dot into its X. Great. But I could imagine doing this in different ways. Maybe there's a property pattern that digs in recursively: if o is a Point, then take the X and put it into a fresh variable. Extract information, right? Check and extract: those are the two jobs of a pattern.
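As a sketch only: this is speculative syntax from the talk's point of view, and what follows is roughly the shape these ideas eventually took in later C# versions, not C# 7. The Point type here is made up to make the snippet self-contained.

```csharp
using System;

class Point
{
    public int X { get; }
    public int Y { get; }
    public Point(int x, int y) { X = x; Y = y; }
    public void Deconstruct(out int x, out int y) { x = X; y = Y; }
}

class Demo
{
    static void Main()
    {
        object o = new Point(5, 7);

        // Property pattern: check the type, dig into members, bind fresh variables.
        if (o is Point { X: var x, Y: var y })
            Console.WriteLine(x + y);

        // Positional pattern: Point's Deconstruct says X comes first, Y second,
        // so positions alone pick out the values.
        if (o is Point(var a, var b))
            Console.WriteLine(a + b);

        // Constants nest recursively: this matches only a Point whose X is 5.
        if (o is Point { X: 5, Y: var rest })
            Console.WriteLine(rest);
    }
}
```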
So extract the X into a fresh variable called x, extract the Y into a fresh variable called y, and those are in scope in what follows. Or you can imagine a more terse, positional way of extracting things. Point says: when you deconstruct me, like the deconstruction we talked about for tuples, the X comes first and the Y comes second. Then I can use positions alone: whatever is in the first position goes into var x, and the second into var y. And of course, in case clauses you already have constants, so constants are sort of already patterns. Instead of extracting into var x and then checking that it equals five, maybe you can just put a 5 there recursively: the whole pattern applies only if o is a Point whose X is 5, and then you take the Y, put it into a variable, and do something with it. So there's much further we can go there, if we want to and if we can get it right. Another thing: for, what is it now, soon seven releases, C# has not had a good static, compile-time way of dealing with nulls. You test, you get a NullReferenceException, you go 'bummer', you fix your code, and you hope you've swatted all of them before you ship. This is one of those places where we're starting to look a little long in the tooth compared to some of the fancy new languages. They're probably going to die out in a couple of years, mayflies, but they still move the state of the art, and we kind of have to keep an eye on that. A language like Swift, for instance (and I'm not actually going to call Swift a mayfly; it's an awesome language) has very deliberate handling of null: you can distinguish whether things can be null or not. We want something like that as well, but after seven releases you can't just put a breaking change into the language and change everything. So that said, here's the kind of thing we're thinking of. One thing we can never do is build a feature with a 100% guarantee that something can never be null. That ship has sailed for sure, for all kinds of reasons I'm not going to give you; ask me later. But we do want you to be able to express your intent when you declare a variable, or declare anything with a type. We expect the syntax will be a question mark: string? n says n is actually supposed to sometimes be null; it's part of its domain. However, s here, with no question mark, is actually not supposed to be null: once it's initialized, it should never be null anymore. And given those two declarations, we can try to enforce safe practices when programming around them. First step: be able to express your intent. Once you have syntax for that, we can help you with warnings, or maybe even errors, to do the right thing. Okay. So sure, I can assign null to n. But if I assign null to s, I'll get a warning: s should not be null; don't do this. And indirectly as well: don't assign a nullable to a non-nullable without a warning. On the other hand, what you can do with s without fear is start dotting into it.
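In sketch form; again, this was a proposed design at the time of the talk, and what's shown is essentially the shape that later shipped as C# 8's nullable reference types (enabled here with the #nullable directive):

```csharp
#nullable enable
using System;

class Demo
{
    static void Main(string[] args)
    {
        string? n = args.Length > 0 ? args[0] : null;  // may be null: says so
        string s = "hello";                            // not supposed to be null

        // s = null;                 // would warn: assigning null to non-nullable
        // s = n;                    // would warn: maybe-null into non-nullable

        Console.WriteLine(s.Length); // fine: s is not supposed to be null
        Console.WriteLine(n.Length); // warns: n may be null here
    }
}
```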
You can dereference s, because it's not supposed to be null. Of course, there might be edge cases where it still is: the default value of any reference type in C# is null, which is why I'm saying it's not a 100% guarantee. But this is going to help you keep a good discipline, as long as you're not doing weird, quirky, edgy stuff. So dereferencing s is fine; it's not supposed to be null. But if you dereference n, we're going to say: you're dereferencing something that is supposed to sometimes be null; it says so in its declaration. So you probably have a bug right here. Of course, in order to enforce that, we need to be able to recognize when you're checking for null. When languages build this in from version one, they just introduce one specific, blessed way of checking for null that the compiler understands, typically one that introduces a new variable. We can't introduce yet another specific way of checking for null; there are already something like seven ways of checking for null in C# that I can think of. So instead we have to recognize, through flow analysis of your source code, that we think you checked for null. Something like this: if this is your code today, and you just went and put question marks in there to improve your checking, we need to recognize the null check you already wrote. The compiler knows that even though n is declared nullable, inside this block it isn't. So it's not going to complain about this; it saw your check, and it's fine. Flow analysis is a way of not introducing yet another way of checking for null. Of course, sometimes you just know something isn't null. You know that whenever that enum over there is Green, this other thing over here is never null; it's part of your invariant. And in those cases, you get to insist. We're introducing the bang operator (we should probably call it something else; the exclamation-mark operator): n! means 'trust me, it's not null'. It tells the compiler: trust me. We could call it the trust-me operator; we're actually more inclined to call it the dammit operator. Do what I say; don't complain. I think that's a very, very likely C# 8 feature, and maybe even one of those we can do in a point release before that. Next topic, and this again comes in from other paradigms: sometimes the language and its object-oriented, stateful, imperative roots are perfect, and sometimes they're fighting you every step of the way. I want my objects to be immutable. I want them to have value semantics, so that comparing them for equality goes by value, not by reference. And we are not very good at supporting immutable objects in C#. We've gotten a little better, but it would be nice to be even better. One thing we can do is make it easier to declare a class that says: hey, this guy is just a data object, and you're not supposed to mutate it. You create new ones from old ones when you want to change something. That immutable, non-destructive-mutation style of programming is sometimes just the right way of working with data.
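Before leaving null behind, a sketch of the flow analysis and the "dammit" operator just described; GetName is a hypothetical method returning string?, invented for the example:

```csharp
#nullable enable
using System;

class Demo
{
    static string? GetName() => "Ada";   // hypothetical maybe-null source

    static void Main()
    {
        string? n = GetName();

        if (n != null)
        {
            Console.WriteLine(n.Length); // no warning: the compiler saw the check
        }

        Console.WriteLine(n!.Length);    // "!" says: trust me, it's not null
    }
}
```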
So we're thinking about adding syntax for that. Maybe it will look something like class Person(string First, string Last);, a very concise form where you're declaring a class and then just putting parens with some types and names. That declares a class with immutable public string properties First and Last; the whole class gets value equality, and it automatically gets a deconstructor, like we talked about earlier with Point. It's sort of just a data object, if you will, and writing that one line keeps you from having to write all the boilerplate by hand. Something like that, which we call record types, might be the right kind of abbreviation to introduce into the language to level the playing field between imperative, mutation-oriented programming and functional-style immutable programming. So that's also on the agenda for a post-C# 7 language. Okay. We have a few minutes for questions, and I hope you have some. I also want to say that I'm going down to the Microsoft booth in the expo hall right after this. If you have more questions or just want to chat, I'll hang out there for as long as people show up. Also, don't forget to press one of those buttons out there. I hear the red one isn't working, so don't bother pressing it; but the other ones, especially the green ones: really good. This is my Twitter handle; tweet at me. And that's what I have. Thank you very much. We have about two minutes left, so let's hear if there are any questions. Maybe shout them out, because I may not be able to see your hands. Questions? Go. Don't be shy. Please don't be shy, Norwegians. There: how would you use the REPL to call into your own code? By default the REPL isn't tied to a project, but you can reference things from it. It actually has a #r directive inside where you can reference an assembly, and that would be one way to do it. And let's say there's a bright future for the REPL; another way of saying that is it doesn't have all the functionality it probably should yet. Seeding the REPL from a project, where you're in a project and just say 'REPL from here' and it has all that context to play with, I think isn't there yet. I'm not sure whether it made it into Update 3 or not; hopefully it's on its way. That's one of the things we have on the docket. Another thing we imagine for the future is unifying the REPL with the immediate window in the debugger, so you can use the REPL as you're debugging. You already have an expression evaluator: you can write an expression and have it evaluated in the context of where you stopped in a debug session. It'd be great if you could also declare some helper methods, maybe pull them in from a script into the REPL window, or declare them on the fly, and use those as you're debugging and looking around. So there's more that can be done, and hopefully we'll get there. Otherwise, did I mention we're open source? So if you want to add any improvements, go ahead. There's more to do than we can possibly do.
So you can actually help; vote with your feet. If you build something, that's great; we'll take it. Okay, next question. We can take another one. There's one up there. I can totally not hear what you're saying. Could you pre-declare local methods? What do you mean, pre-declare? Oh, declare them up top and define them down at the bottom? That's a good question. We started out cautiously here in the prototype, and I think we still need to think more about it before we ship, by giving local methods the same scope rules as local variables. The thing we have to worry about is that they can see local variables. So if they could be called before they're declared, you could call into something that reads a local variable that hasn't been initialized yet at the point you call from. There's stuff like that we need to figure out. But we have talked about it, because there are problems with the current rule. One is that it gets kind of rigid about where you have to declare them. Another is that you can't have mutually recursive local methods, because the first one can't see the second one. The only way to get mutual recursion is to nest them inside each other, so they can both see each other, which is absurd. So there's something that isn't quite there yet. We may ship like this in C# 7 if we don't solve it, and then loosen the rules later; that might be one way of attacking it. Okay, I see that we are now out of time. So thanks again, and come see me at the Microsoft booth. Thank you.
C# can be developed and run on more and more platforms, and thanks to the “Roslyn” language engine you can increasingly make your own tooling for it. C# 7 is set to embrace several new features for working better with data, such as pattern matching, tuples and records. Come see what’s in store for C#!
10.5446/51708 (DOI)
All right. This is Continuous Integration for Open Source Projects with Travis CI. My name is Kyle Tyak; I'm an API evangelist with Akamai. I spend my days working with our clients to streamline their workflows and get them up to speed with our open APIs, so I spend a lot of time with HTTP requests and debugging and fun stuff like that. When I'm not working on the API stuff, I like to dabble in Node.js; I did a lot of full-stack development in a former life. I spent 10 or 12 years freelancing, working heavily in Flash development and ActionScript, so JavaScript is sort of a second language for me. It makes sense; it's very similar to ActionScript. When I'm not coding, working with clients, traveling, or speaking at conferences, I spend a lot of my time brewing beer. I've got a bit of a problem with it; it's become an obsession of mine. You can pretty much always find me talking to somebody about beer, so if beer interests you, pick my brain after the session. It's gotten to the point where I'll talk to just about anybody about it. For the record, turkeys don't drink beer, despite my best efforts; I thought he might taste better if he had a few of these, but he didn't bite. So I'm going to tell you a story today. A story about obsession, about love, about heartache, pain and passion, conflict, about late nights and early mornings, about a deep and driving desire to share a part of oneself. Of course, I'm talking about the management of open source projects. My story started a little strangely. I had just joined Akamai. I wasn't too familiar with some of the projects we were working on, but being the Node guy on the team, I was tasked with creating a signing library in Node for our authentication systems. As it turned out, one of our community participants, a fellow developer, had already created a Node library for a client he was working for and had open-sourced it on GitHub. I reached out to him and said: hey, you've done something great; we'd love to use this internally on our projects. How do you feel about sharing it with us and letting us host it on Akamai? And he said, sure. That was great: I didn't have to rewrite the entire library; it just needed a little cleaning up. I forked the project and did some cleanups, and this was kind of my first time working with an open source project, so I didn't really know what I was doing. The first thing I did was clean up all the code. I wanted everything nice and uniform: the same tab spacing, semicolons in the same places, line returns before the closing brackets, all that stuff. So I did that, committed it back, bumped the version up to 2.0 (not realizing that would signal a breaking change), and pushed everything back up to GitHub. About 30 minutes later, I got an email from the developer saying: what did you do? I can't commit anything. I can't merge. There are conflicts everywhere. What's the problem? We stopped and talked about it for a while and figured out that my stylistic update, changing the formatting of every file, had made so many changes that he could no longer merge into the code base. So we started talking about how to handle this going forward, and how we could enforce stylistic uniformity across the entire code base. This was something I wasn't familiar with, but it's a process called linting.
Linting, basically, is how you enforce stylistic rules. If you want four spaces to mean a tab, or semicolons at the end of every line, whatever style guidelines your project requires, you can enforce that with a linter. There are tools out there for every language; for JavaScript and Node there's a thing called JSCS, which I'll show you a little later, that does exactly this. So he recommended I add it to the project. But I still didn't know how to enforce it on code people were committing through GitHub. If somebody in the audience here wrote a change and committed it through a pull request, how do I ensure they're following the guidelines we set forth? That's what turned me on to this thing called Travis, which he recommended I check out. It's a hosted continuous integration service, and basically it lets you run a build every time somebody commits code, whether through a pull request or a normal commit; however the code gets into the repository, you can run a build with testing and linting and everything else across the project. So I implemented it, and with it I was able to enforce those stylistic guidelines: every time somebody committed code, we could check whether they were following the rules we'd set forth. If we'd had this in place when I first got my hands on the project, it would have told me right away, as a developer: hey, you've broken this thing; it's not going to work for everybody; you're not following the guidelines; make some changes before you're allowed to commit. And that's what turned me on to Travis and the continuous integration idea in general. Another story that might be more familiar: how many people here have created an open source project themselves? A couple of you? Okay. So, the new GitHub project. This is the start of everything. You create a piece of software, something you want to share with the world. It's your baby; you're excited about it; you post it up on GitHub. And after a while, hopefully, people issue some pull requests. This is great: people like what you're doing enough that they're willing to write code against it. It's fantastic. Everything is going well. And then you have to merge. Usually not a big deal: it says you can merge, there are no conflicts, you merge things in, you move along. But eventually you get this: the fated merge conflict. Again, not the end of the world, but kind of a pain. You have to pull everything down, read through another developer's code, figure out where the conflicts lie, reformat, fix things up, and then merge everything back up. This takes a lot of time, and time is something we all have very little of. So what happens a lot of the time in open source projects is something like this: you end up with a pile of stale pull requests. I see this all the time in projects I follow. There are 20, 30 pull requests just sitting there, some of them six, eight months old. And to me, the reason behind this is that people just don't have time to handle all the merging and conflict resolution.
So the idea I'm putting forth here is that by using tools like continuous integration and Travis CI, you can ease the process of working with open source projects and merging in all this code. By linting and testing your builds as code gets committed, you save yourself a lot of time, which hopefully prevents the stale-PR pile from happening. So let's take a look at how this all works, starting with continuous integration itself. The definition put forth by ThoughtWorks, a company I highly recommend you follow if you're interested in CI (a guy named Martin Fowler there is kind of the guru on this stuff, and I studied him quite a bit preparing for this talk), is: a development practice that requires developers to integrate code into a shared repository several times a day; each check-in is then verified by an automated build, allowing teams to detect problems early. There's a lot of information packed in there, so let's break it down a little. A shared repository: for all intents and purposes here, a GitHub repository. It could be any version control system, but Travis specifically works only with GitHub, so we'll focus on that. The code obviously needs to be shared, otherwise there's really not much point to this whole talk. One thing you want to make sure of with continuous integration is that everything needed to build the project exists in the repository. I don't know how many times I've seen a project, taking something over for a client or checking out an open source project, where I'd pull the code, try to do a build, and it wouldn't work because six libraries or some configuration file weren't included. So a thing to remember with continuous integration: any developer should be able to check out the project and build it immediately, with everything they need in one place. Several times a day: this is kind of an odd concept for a lot of developers. Has anybody here heard of the concept of a release party? Maybe this is a States-side thing. It's this really sick, kind of twisted idea of throwing a little party to release a new code distribution. In more traditional workflows you'd work on code for two or three weeks, maybe a month or two, committing along the way, and end up with one massive merge at the end. So two or three weeks into the process you'd have this release party: take all the commits from the last few weeks and try to merge everything back into the master branch. At my first startup job out of college we did this on a regular basis, and I remember one release party in particular that lasted three days. A whole team of engineers trying to merge all this code in over three days, and what you end up doing, almost always, is rolling everything back and then spending another three weeks fixing all the problems. So committing several times a day is quite different from your standard approach of these long merge cycles. The idea is that it breaks your code down into smaller snippets.
If you're having to commit at least once a day, or more, you have to break things down into much smaller pieces; you obviously can't write 12,000 lines of code in a single day. This has a couple of benefits, one of which is that it makes your code easier to debug. If you commit on the same day you wrote the code, you'll have an easier time finding any bugs that arise, because you just wrote it; you're not coming back to it three or six months later trying to figure out what you were thinking at the time. So committing on a regular basis has the added benefit of making debugging easier. It also gives a sense of progress. I don't know how many times I've worked on a project, slogging along for two or three weeks, and only at the end did I get to commit and finally feel like I'd achieved something. Committing on a daily basis gives a sense of progress to the project, and it can actually make it seem a lot more fun. Next, being verified: that's the build process part. This is the added value of making sure that anything making it into the branch via a merge has been verified by your build process: whether you run unit tests, run a linter, do a version bump, or whatever checks your build needs, you're verifying that any code actually landing in the branch is valid in terms of whatever your build requires. The benefit is that you can constantly have a branch, whether it's master or whatever branch you use, that can be released to production. Automated build: obviously you have to have a way to automate, and there are a ton of systems out there. For us it's Travis CI, but a lot of people use Jenkins or TeamCity for similar processes. I'll talk a little about why I like Travis for open source projects specifically in a bit. And detecting problems early: again, that's the benefit of being able to debug right when you write the code. It's a lot easier to detect something and figure it out now than going back a few weeks later. So the basic process of continuous integration looks something like this. You write your code, you make a commit, it goes into source control, and it triggers a build. For us, that means initiating the continuous integration process on Travis: Travis spins up a virtual environment and actually runs your build process. Any tests and anything else you've set up in that build get run. It then issues a report back, in our case to GitHub: hey, this is what happened with this build; if it failed, it's going to let you know, and if it succeeded, it's going to let you know. What that does for open source projects specifically is give you the ability to see on the fly whether a project can be merged in, saving you the pain of digging through the code and validating everything by hand. It really speeds up the process, and it makes you much more confident in the things going into your code base. So: Travis CI is a hosted continuous integration service that's free for open source projects.
They do have a paid option available for enterprise now, but their big push, where they made their name, was really in the open source world. That's why, if you start looking for Travis in projects, you'll see a little .travis.yml file in a lot of the open source projects you use today. After I started using Travis, I noticed it in pretty much every project I use on a regular basis. So let's look at this a little closer. It's a hosted service. Now, this is the big value-add for me. It's also free, which we'll come back to in a second, but the hosted part is the biggest piece. With something like Jenkins or TeamCity, you have to spend a ton of time actually spinning up a server, configuring it, and maintaining it. With Travis, they host the entire service for you. So instead of spending your time hosting a server, configuring all the software on it, and upgrading the packages and modules and everything else it runs, they take care of all that, and all you have to worry about is writing your code. And again, for open source projects, time is of the essence, so being able to save time by not hosting a server is a big win right away. It's also free, as I mentioned: for open source projects it doesn't cost a dime, and you can literally set it up in about ten minutes and be up and running. Some other benefits of Travis: it's open source itself, so if you find a bug, or an issue that requires some new development you want to contribute to, you can issue pull requests back to the project. It's also multi-language: it supports 29 languages at last check. It's got support for pull requests, which is really interesting and helpful for open source projects, because you can trigger builds based on pull requests. Most continuous integration stacks only work on commit, when you're actually committing code into a branch; being able to trigger via pull request lets you monitor and identify very quickly which contributions can go in. It's also got simplified deployment: they've worked with companies like Heroku and Amazon S3, and with language-dependent registries like npm and PyPI, on integrations that simplify the deployment process. So after your build happens and it's been green-lit, you can immediately deploy out to, say, Amazon S3 or Heroku and get your project updated. And it's really easy to use, as we'll see in a minute. Again, on the language support, just a couple of the languages I've worked with in Travis: Python, Node.js, PHP, Go, Android. They don't offer .NET support out of the box yet; I know this is a big .NET conference, so I wanted to mention that. In talking with a good friend today, he did mention that .NET Core is coming, that it runs on a Linux environment, and given that Travis runs on Linux boxes, you should be able to set that up to run as well. I also know they're working on adding Windows and .NET support as an out-of-the-box solution. But to date, there are 29 languages that work, so just about anything you're writing in, you should be able to support with Travis. So: adding Travis to your project.
I created a simple little Node.js project here; I figured Node was pretty easy for everybody to understand, and it's something I work with on a regular basis. But the idea to take away is that this works the same way for any language: whether you're working in PHP, Python, or Node, it's the same set of steps to get configured. To give you a quick idea of what the project looks like: we've got a simple hello.js app that does nothing more than define a sayHello function, which says hello. We also have a single test, because if we weren't testing anything there wouldn't be much point to having a build. The test just checks that we're saying hello to the correct name: in this case, we say hello to Dave, and it checks that we said 'hello, Dave'. Very simple stuff. So if I go and run our tests (I'm using a thing called Mocha to run them): I type mocha, and you can see all our tests are passing. So far, so good. Now, to add Travis to this, we're just going to add a single file: .travis.yml. Is everybody familiar with YAML? The name is a recursive acronym, 'YAML Ain't Markup Language', and I still haven't really gotten the joke, because it kind of is a markup language. It's just a way of writing a config, similar to an RC file or anything else. The basic idea is that you have keywords, then colons for values; if you're doing a list, you add item two, item three, and so on, as a list of items. If you were going to support multiple language versions, you could add a list, and Travis would actually build out matrices, running builds across all of them. The very basic, bare-bones Travis file, which does nothing more than name the language and run a build with some simple testing, looks like this. The language is identified as node_js (that's a keyword from their documentation), and the versions get specified below it. If I wanted to run multiple versions of Node.js, to test across different versions, I could add something like 4.7; and 'stable' is just the latest stable build, so it pulls down the latest stable version of Node.js. If I listed 4.7 as well, it would do a separate build against the 4.7 environment and let me know how each of them succeeded or failed. That's really useful if you have to test multiple versions for your project. And then finally, it just runs the script section, and the script is basically your build: everything you put under the script key is what runs when Travis builds your project. For us, it's just running that same mocha command we ran there, and that runs our unit tests. So looking back here (I'll full-screen this again; is that showing up okay for you? Big enough?), you'll see in this image that when you get a successful build, it actually pushes the status back to GitHub. This is all working through the GitHub webhooks: when you first issue a commit to GitHub, it tells Travis, hey, I've got a new build coming in; you need to run the build process. Travis spins up a virtual environment and installs all the software you've told it to.
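For reference, the bare-bones file being described looks roughly like this (the file is named .travis.yml; "stable" pulls the newest Node.js release, and extra entries under node_js would fan out into a build matrix):

```yaml
language: node_js
node_js:
  - "stable"     # latest stable Node.js; add e.g. "4.7" for a second build
script:
  - mocha        # the build: run the unit tests
```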
In this case, that software is Node.js, the latest release. Then it runs that Mocha process, which runs our unit tests for us, and sends the information back to GitHub: I failed, or I succeeded. In this case it's going to say it succeeded, and you can visually identify in GitHub that you have a successful branch that can be merged in. Again, this saves time: you just look at it, it says all the checks have passed and the branch has no conflicts and is ready to merge, you push a button, and you're good to go. You're not having to check out the code and run all sorts of checks on it yourself. And the same thing if it fails: it says the checks have failed, and why, so as a developer you can see what went wrong with your code and what happened during the build, and go back and fix it. As the person managing the project, you can then leave comments for the developer: hey, I noticed some issues with your code; it didn't quite pass muster; why don't you go back and fix those before I actually commit this? It takes the ownership of changes off of you, the open source project manager, and puts it back on the developer actually writing the code. Which, again, saves time and makes the project a little easier to manage. On pull requests: there's full support, and the workflow looks very similar to what we just saw. A pull request gets issued, GitHub tells Travis about the pull request, Travis runs the build, and then tells GitHub what happened. Pull request success looks exactly the same: the PRs tab shows the same success or failure on the pull request that it does for commits. Now, .travis.yml: this is sort of the brains of all the Travis stuff. It's your config file. It's what tells Travis what it needs to do, what it should install, how it should run the build process, and what should happen if a failure occurs. Everything Travis needs to know about your build should be in this file. And the life cycle of this file looks a little like this: you've got an install step, a build step, and a deploy step. Install is what you do to set up and scaffold your environment, so any software that needs to be installed into that environment should happen during this step. When the environment first gets spun up, there's nothing there. It knows nothing about you; it doesn't know if you're running Python or PHP or Node, and it doesn't really care. You have to tell it what to install. There are some system defaults per language: for Node, the install step runs npm install, installing all the packages your project declares; for Python, I think it runs pip install. You can check the defaults for your language in the docs. There's also a before_install step, which you'd use if you needed to download a zip or tar package before installation: if there was a custom fork or custom branch of some piece of software you needed, you could fetch the archive in before_install, and then, when the install step actually occurs, tell it to install that software. The build step, as I mentioned, is where the bread and butter happens.
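Before digging into the build step, here's a hedged sketch of how all these hooks fit together in one file; the script contents are illustrative placeholders, not from the demo:

```yaml
language: node_js
node_js:
  - "stable"
before_install:
  - wget https://example.com/custom-dep.tar.gz   # hypothetical pre-download
install:
  - npm install                                  # the Node default anyway
before_script:
  - ./scripts/seed-db.sh                         # hypothetical test-data setup
script:
  - mocha                                        # the build itself
after_success:
  - echo "build passed"
after_failure:
  - ./scripts/upload-logs.sh                     # hypothetical log upload
after_script:
  - echo "job finished"                          # last chance before exit
```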
The build step is where everything gets built: it runs your unit tests, your linter, and anything else that needs to happen to check the project. It's broken down into a few additional sub-steps: before_script, script, then after_success and after_failure, and after_script. before_script is sort of your last chance to handle anything before actually running your build: if you need to set up some databases and put data into them, whatever your tests need, it's your last chance to do that. The script step, again, is where your build happens. Then after_success and after_failure run after a successful or failed build respectively. Travis actually works on exit codes: if anything reports non-zero, the build fails; if zero gets reported, it passes. So you have these two events letting you know whether it succeeded or failed, and you can use them for any additional steps you need. After a failure, you might want to upload log data to your server: hey, there was a failure, here's what went on, and here are all the logs from the build. After success, you might send a notification somewhere, or upload the project to a custom deployment server, whatever you need. And after_script is the very last step: it happens after the full build is complete and success or failure have been notified; it's your last chance to do anything before exiting. The deploy step is for the custom deployments that Travis has built. If you're doing deployments of your own, for a service they don't support, you'd do that in the after_success or after_failure steps. But for deployment providers they work with, like Heroku, Amazon S3, and npm, you can do it in the deploy step, which comes with before_deploy and after_deploy: before_deploy is where you set up anything you need, deploy is where you actually ship your code, and after_deploy is what you do to clean up after the fact. In an instance we'll look at here in a minute, we're going to bump the npm version, deploy out to npmjs.org to update our package with the new code, and then in after_deploy we're not really going to do anything; though if we wanted to, we could send a notification or echo something out to the system. So, linting: again, this is where you can actually enforce some stylistic guidelines; let's take a look at what that looks like here. It looks much the same as it did before with our simple unit testing: we've added this little entry to the script section, jscs src. JSCS is a Node package that lets you do linting on your project. You run it from the command line by just doing jscs followed by the source path, and it runs the linter against your entire project. You can see here I've got a missing semicolon after a statement, so it's going to give me an error there. Let me go ahead and commit this up: make a simple change, add a space or something, stage everything, give it a little message, 'break the build', and push it up to GitHub. (That would probably help if I typed git instead of bit.) So that pushes up to GitHub.
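The linting config from this demo is essentially the same file with jscs added to the script section; and for completeness, a hedged sketch of the npm deploy step mentioned a moment ago, where the email and api_key values are placeholders, not real credentials:

```yaml
language: node_js
node_js:
  - "stable"
script:
  - jscs src     # enforce the style rules; a missing semicolon fails the build
  - mocha
deploy:
  provider: npm
  email: you@example.com          # placeholder
  api_key: $NPM_API_TOKEN         # placeholder; store the real token encrypted
```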
That push kicks off a webhook, which notifies Travis, so let's jump over to the Travis CI site and see what happens. When you first come to the Travis site, you'll see the sign-up link; it asks you to give permission to GitHub, which allows Travis to access all the public repositories you have admin access to, so it can pull in the data and let you interact with them from the Travis system. I already did that quite some time ago, so when I click sign up it's going to skip past that and just sign me in; you'd see something a little different, where it asks you to grant access for GitHub. So you can see here: here's 'break the build'. When you first come into Travis, it takes you to your most recent build, and you can watch it spin up the build and show you what's happening in the environment as it builds. You can see it installing the latest stable version of Node via nvm, version 6.2.1. It runs npm install (that's the install step), downloading all the packages; you can see JSCS there, one of the packages it's loading, so it has everything it needs for the build. Just takes a second here. If you look at the npm install, by clicking these little tabs on the right you can see more information about what's going on during that process: for Node, all the sub-packages included in that install. Then it runs jscs src, our script block, and you can see the same error we have locally: we're missing a semicolon. If this were a pull request, or a full commit trying to merge into master or something, that information would be spat back to GitHub, showing the error and saying: hey, this isn't valid, and this is why you need to make an update. So linting is pretty simple to set up. Obviously this is a very simplified example; usually you'd have 10 or 20 more options really refining your linting rules. And while we're on the Travis site, let's look around a little. (Are you able to see that okay? Let me blow this up a bit.) When you get on Travis, the current build is the latest: this is what's been going on with your build, and right now it shows me, very easily, a big red flag: there are breaking errors with the build; something is going wrong. I can look in here to find out more. And this is persistent, so it will stay there if I want to go back later and look at what the build information was. There's also a branches tab, which shows you all the branches available. You can see the master branch is still passing, while the linting branch is breaking, and you can look back at all the builds and why they broke. And then there's also a build history.
So this is the full history of every commit, every pull request, every build that's ever occurred, which allows you, again, to go back in time and kind of see what's been going on. And then you can also filter by pull requests. So you can see here's the PRs that have happened in the past. And if, as a developer — you know, if I'm working on this project — I want to come in and see just what's been happening over the last couple of days, I can look through here and see all the pull requests, and then go back over to GitHub and decide what to merge. There's also a help section here.

Actually, let's look at one more thing here. This little plus tab — I missed this at first — this is actually pretty important. This is where you enable and disable projects for Travis. So after you've added that Travis YAML file and you've committed it up to GitHub, the last step there is to actually enable it by clicking on this little green button here. So I've just disabled Travis for the project. It's not going to kick off any more web hook events. If GitHub notifies it of anything — I mean, it's not listening, basically, at that point. Clicking it on, that activates it.

The only other thing you need to do — and you don't have to, there's defaults in place — but if you want to do any more custom configurations, this little gear icon here: you've got options for building only if the Travis YAML file is present. So if that YAML file doesn't exist, you don't have to kick off builds, because they're not going to really do much anyways. You can build pushes or not. So if, for some reason, you don't want to build pushes, you have the option to turn that off. The same with pull requests. And you can also limit concurrent jobs. If you're doing a lot of builds and it's taking up too much space and they're running really slowly, you can limit how many builds can happen at one time. This is also useful if you're doing what I mentioned with the matrices. So if you're having more than one version within your build and you only want a certain number to happen at a time, that's a good way to go about it. And this is also where you'd set environment variables. So if there's anything that you need to be available in the environment, like tokens for authentication or something like that, you can store those here, and they'll actually be available in the environment when it builds.

And the other section I was going to mention was the docs. They have a really great documentation site. They've actually got guides for a lot of specific languages. You can see here the programming languages that are available. As I mentioned, there's about 30 now. And there's also language-specific guides for most of them. So if you're working in Java or Objective-C, you can actually follow step by step how to make them work for that specific language. And they've got great information on that. This is also where you'd come to learn about any of the specifics of setting up notifications or databases or anything else. They've done a really good job with the docs, though.

So jumping back over here. So again, with linting, it's very simple to set up. You use your jscs source if you're using Node. For your language, whatever it is, you can set that all up. And then your failures are going to be reported.

So, database support. This is another huge piece that Travis did really well, in my opinion. They've made a bunch of services for these various databases.
So all you need to do is identify the database that you're actually working with and declare that in the Travis config file, in the YAML, and it'll actually spin it up on the server. By default, these don't get installed — to save memory and to save space, you know, make it run faster. So if you tell it you need MySQL, it's going to install it on the fly before your project runs — so during that install step, or just before.

So if we look here at, let's do, the databases example: I've updated the Travis YAML file here to use the services key. I told it that I want to use CouchDB. You could do Mongo, you could do Redis, you could do MySQL, whatever it is. They all have a little bit different setup options, and again, those are all available in the documentation. But for Couch, it's very simple. You just tell it, I'm going to use CouchDB. And that's going to tell Travis, when you spin up that environment, I need you to install CouchDB so it's available to me.

In this piece here — Couch actually uses HTTP requests to set up their databases. So I'm doing a curl and then telling it to use the PUT request type, and then I'm going to call out to the database. This is the path. And I want to set up this database called travis node demo. The second part here is actually just going to call out to this _all_dbs endpoint, and all that's going to do is return and say what DBs are available — basically ensuring that we know, and spitting out to the command line, that our database was in fact created. So we ran that. That's really tiny; it looks something like this. I'll blow this up a bit. So we'd get back, from the Travis build cycle in that step that we saw a minute ago — you'd actually see, okay, true was our response from creating the database. And then if we polled and saw all DBs, we'd actually get this back. You can see the travis node demo database is now available.

Now, this is done in the before script step. That's the step that happens right before the build actually gets kicked off. So again, you're going to need the databases available for any testing you might do during your build, so you'd want to do this in the before script step. And this is also where you'd do any additional things, like adding data to the database or populating it with information, whatever you might need to do there. But again, this works much the same for any of the databases you might be working with. It's a very simple process.

So the Travis client is worth mentioning. They have a command line interface client. I think it's a Ruby gem, and you install it through whatever package manager you might use for your system. I used Homebrew to install it, I think. And the command line client lets you do things like encrypt your keys. So as you look at adding new services for notifications, like Slack, which we'll look at here in a second, you can actually encrypt the keys that you're going to store in your Travis file, so you don't have to worry about handling that yourself. Travis has a method for doing it through the CLI. It also lets you do things like lint — and test, verify, validate, whatever you want to call it — your Travis file. So I've got it installed already, and it's just run under travis. So by doing travis lint, it's going to look for the Travis YAML file and just validate that that file is actually valid. So you can see: hooray, Travis looks valid. We're good to go.
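Pieced together, the CouchDB setup just described can look roughly like this in the Travis YAML file — the database name follows the demo, but treat the exact URLs as a sketch (CouchDB listens on port 5984 by default):

```yaml
# Sketch of the CouchDB service setup described above.
services:
  - couchdb

before_script:
  # CouchDB is administered over HTTP: create the database with a PUT...
  - curl -X PUT http://127.0.0.1:5984/travis_node_demo
  # ...then hit _all_dbs to print every database and confirm it exists.
  - curl http://127.0.0.1:5984/_all_dbs
```

And the client usage is along these lines — a sketch of the commands mentioned here and used a little later in the talk:

```sh
# The Travis CLI is a Ruby gem; install it and validate the config.
gem install travis
travis lint          # checks the Travis YAML file for invalid keys/formatting
travis setup npm     # interactively generates an npm deploy section
travis encrypt MY_SECRET=value --add   # stores an encrypted value in the file
```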
If there's any errors with the file, for formatting or keys that it didn't understand, it would actually report them here and let me know what's going on with it before I uploaded it. So it's great for debugging. And as we'll see in a minute here, you can also use it to add information to your Travis YAML file.

So, deployments. I mentioned Travis has a bunch that are currently available, and they're adding more all the time. Some of the big ones: Heroku, NPM, PyPI, Amazon S3, RubyGems, and GitHub. So as you're managing your projects — that was something else I found quite painful — every time we did a new build on the project, we'd have to push it up separately to NPM, and I'd have to manage that myself. So every time somebody did a pull request, I committed the code in, we merged, we had a new version to do — I'd have to do an npm update, update the version, then I'd have to push it up to npmjs.com. It took a lot of time. So luckily, Travis has an easy way to handle that.

So again, we're going to use these before deploy, deploy, and then after deploy steps. These are only, again, for the custom integrations that Travis has actually built. If you're doing something like your own integration service — or rather, your own deployment service, like you want to deploy it out to your own server — you would do that in the after success or after failure steps. But for the integrations that they have: we're using before deploy for npm version patch. It's just going to bump the version when we have a commit. And then after that, we're going to use the deploy key here. We're going to list the provider as NPM, supply the email address that's for the account holder on NPM, and then we're going to give it our API key. Secure here just tells Travis that you actually encrypted your key using the Travis CLI, so it knows to decrypt it before it tries to use it. And then the last piece here is on. You tell it when you want it to actually push these builds. So if you want it to push for tags only, you can say true. If you want to push just a specific repository — in this case, this is the repo we're using. And if you want to use all branches, or maybe just your master or your production branch, you can set that up as well. So there's a lot of configuration options for when it will actually push out your new version, so you can fine-tune this however you might want.

So we can take a look at how that actually looks in the project. So you can see here, I've still got my CouchDB. I've still got all my information for curl and installing JSCS. One other thing that I found that was a particular issue for me — like, there was no information on this in the docs, this was kind of a stumbling block at first — is I had to add my global email address for GitHub into the before script key. And the reason I had to do that was because NPM actually looks for that when you push up the build. Or rather, when you're doing a commit to NPM from the command line, it looks for your GitHub email address. So I had to do the git config global for user.email, and then I also used user.name, so that actually tags that commit to NPM with my name. So if you run into any issues with deployments, look for the global configs for the GitHub account — there might be a problem there as well.

Again, here's the before deploy. So we're going to list out the NPM version before we patch it, just to see what the current version is. All that's going to do is spit out the information to the command line so we can take a look.
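Collected into one place, the deploy-related pieces being walked through here look roughly like this — the email address, repo slug, and encrypted key are placeholders for the real values:

```yaml
# Sketch of the npm deployment setup described above.
before_script:
  # npm's version bump creates a git commit, so git needs an identity.
  - git config --global user.email "you@example.com"
  - git config --global user.name "Your Name"

before_deploy:
  - npm version          # print the current version first
  - npm version patch    # then bump the patch version

deploy:
  provider: npm
  email: you@example.com
  api_key:
    secure: "ENCRYPTED-KEY-FROM-THE-TRAVIS-CLI"
  on:
    tags: false                          # set true to deploy tagged commits only
    repo: username/travis-node-example   # only deploy from this repository
    all_branches: true                   # or restrict to master/production
```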
And then we're going to run a patch to update the version. Afterward, we're going to actually call out to NPM — and here's my secure token that I was talking about — and it's going to push out the latest version.

If I delete this real quick, I can show you guys what the Travis CLI looks like for this. So you do travis npm... And they have a number of these available for various providers. It makes setting them up a lot easier than having to punch that information in yourself. What is the command? I guess we'll get a good look at what's available. No, it's not listed. I just used it earlier today. All right, well, so here's the easy thing. We'll get a better look at the docs. If we just search for NPM — it's setup npm. And again, they have these deployment guides for all the various deployment services that they work with, so they're really helpful.

So if we do travis setup npm: "Deploy section already exists" — so it's going to notify you if there's already a section in your file. So if I just hit save there. So it's going to ask for my NPM email address, for my API key — I'm just going to make something up here — and then whether or not I want to release only tagged commits. It's kind of a helpful feature if you use tagging as part of your release process. Release only from this project? Yes. No, I want to release all branches. And encrypt your API key? Yes. So what this is going to do is actually add that section. The unfortunate part is, I like to format my stuff a little bit differently than Travis, but it actually does a cleanup on your whole file. So you can see it's changed everything so that the lists all start on the same line. There's no tabbing to actually make this a little bit more readable. But you can see here it's added that whole deployment section for me, which is really useful if you don't want to read through all the docs and have to go through adding all this stuff by hand. So yeah, you end up with your deploy section. There's a bunch of those command line tools available.

So if we just make a quick change here, take a look at what this looks like. So we're just going to push up a simple change, just to kick off the deployment process, so we can kind of see what that looks like. If we go back to Travis CI, you can see — now it's showing the deployment. Sometimes it'll be a little bit lagged behind, and it'll show the last build under current. If you run into that, I usually just go to build history, because you'll be able to see it — even if the build is still taking place, it's always shown here. So if current isn't showing you the build that you think should be running, you can always check it here as well.

So if we click into that, we'll be able to see the build running. So right now it's cloning out the travis-node-example repo. It's installing CouchDB and running the process. Now it's updating NVM, and it's going to install Node. Just bear with me, guys. Sometimes this takes 30, 45 seconds. So it's downloading everything it needs for Node. Now it's running npm install to install the packages for our project. You can see there's that git config I mentioned — so it's actually setting up my global git email address and name in this environment so it's available. Then it's going to run our unit tests with — sorry, our linting with JSCS. It's going to set up that database. Then it runs our tests. Our tests all passed. So far it's made it through. You can see here's that exit with zero, which means it's a passing build.
Now it's installing any dependencies for the deploy. That would be the before deploy step that we talked about. You can see here, again, this is really useful stuff on the right. If you click through on these, you'll see some additional information. Obviously we didn't install anything for the before deploy, for pre-deploy. Preparing deploy — you can kind of see what's going on behind the scenes. So it checks the npm version. It authenticates to npm before we can push this all up. Then does a git stash. So if there's any changes that occurred actually within the Travis environment, it's going to clean that all up before pushing it out, so you don't have a bunch of remnants or files that aren't needed being pushed out to your build. Then it's going to update the working directory and push it back to GitHub. And you'll see there that it deploys the application.

So it's worth noting again these little guys on the right here, because I had some times where this was actually failing for me. The deploy was failing because I hadn't bumped the version number on my local repository, and I tried to bump it here. And so essentially what was happening is that I was updating it to the version that was currently on npm. It was already released. And in doing so, it would actually fail. But it was failing silently, because I didn't have these expanded. So it would say "deploying application" and everything looked great — it doesn't look like there's any issues — but in clicking this, I would actually see there was an error in the deployment process. So again, if you're pushing something out to a deployment service and it doesn't seem like it's working for some reason, and you can't figure it out, pay attention to these little tabs on the right and check out if there's any additional information available.

But you can see here this deployed out to npm, and the version we're at now is 1.0.11. So we bumped the version. Now if we go over to npm — and it was previously at 1.0.10 — we do a refresh. There's 1.0.11. So again, this could have been a pull request or a commit from another user, and given my configuration in Travis, I could have built the whole project, verified it, and, with no hands-on at all, I could have pushed out the project, right? So you can start to see how this would save you a lot of time and a lot of effort managing your projects.

So notifications is another really cool part. There's a lot of options for notifications with Travis. These will let you know when a build occurs. You can set it up to tell you when a build fails, or when it succeeds, or when it starts. You can really fine-tune it to do whatever you want. Some of the available notifications: email, IRC, HipChat, Slack is a big one, and web hooks. Web hooks are really useful if you use any custom web hook stuff in your own company or in your own environments. So if you wanted to set up some custom integration with maybe Twilio, to send you a text when a build happens, for whatever reason, you could set up some sort of a DevOps tool to do that.

So the notifications section looks just like the rest. It's got an identifier of notifications. Underneath that, you can set up email, Slack, IRC, whatever you want. For email, it's on by default, and it's actually going to push it out to the owner of the project on GitHub. So if you're the admin for that project, it's going to notify you by email by default every time a build succeeds or fails. You can customize this to send it out to any recipients you want or to change your email address.
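As a rough sketch, a notifications section along the lines being described can look like this — the addresses and the encrypted Slack token here are placeholders:

```yaml
# Sketch of a notifications section like the one described.
notifications:
  email:
    recipients:
      - personal@example.com
      - work@example.com
    on_success: always   # other options: never, change
    on_failure: always
  slack:
    rooms:
      - secure: "ENCRYPTED-SLACK-TOKEN-FROM-THE-TRAVIS-CLI"
```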
So if you wanted to go to your personal email address instead of your work email address, or something like that, or if you wanted to add additional recipients, you'd do that all here. We can look at this in the project as well. So you can see here's our notifications section. I've got the recipients list set up to just my personal email address, but if I wanted to, say, also send it to my work email address, I could just add it to the list here, and then it would send it out to both of those addresses, based on my options. On success and on failure, I've got them set to always, so it's going to send me a notification no matter what. You can also set it to never. So if you just don't care about failures, or if you only wanted to know about failures, you could turn success off.

And then I've also got a configuration here for Slack. Just to look at that real quick, on how you set that up: you go to whatever your Slack team name is, .slack.com/apps. This is where you install applications. And this is all in the docs for Travis as well. But there's actually a Travis CI app. So if I just go look at the Travis CI here — you'll see here's the Slack team that I'm working with, and I'll just say install. It's going to install the Travis app for Slack, which makes it really easy to set this all up. And then I'm going to choose a channel. So I'll just go with build notifications — it's a channel I'd set up previously to handle all of my notifications. And then just add integration.

Now, I was particularly impressed with this whole process, because it made it extremely simple to copy and paste this stuff into my config file. So you can see here, if I want just simple notifications, I can copy and paste here. If I want to send it to more than one channel, I can copy here. If I want to encrypt — this is the actual command line for adding these using the Travis CLI. So it'll actually take my token and my channel name, and then it gives me the full line to paste into my command line, just to encrypt that data and add it to my config file. So if I ran this, it would actually add something to my Travis file that looks like this. It would add the Slack piece for me, and the secure encrypted token for my Slack channel.

So this will start giving us notifications then, every time those events happen. So again, for emails, it's going to look something like this. In Gmail and Inbox now, they're actually tagged right there in red and green, so you can see failures or successes right away. And again, this helps you manage your project. And then Slack — this is really helpful too. So if you set up your own Slack channel for your project, you can actually monitor these throughout the day and know when something goes on. So if a build gets committed and you see red, you know that you can click through and just go back to that commit and say, hey, this is why your project failed, or why your commit failed — maybe make these changes before committing and then push it back up. And again, this is really placing that responsibility back on the developer who's committing the code, rather than making the project owner be responsible for everything — with the goal, again, of saving time and making things a little easier for you.

One last thing is build status badges. This is actually kind of a fun feature. And once you start looking for these things, you'll see them in almost every GitHub repo. This is sort of the telltale sign of the fact that they're using Travis. To add these, it's very simple. Let's go to... Actually, it's right here.
So you can see this build is failing. And I didn't find this at first — I actually had to look in the docs for this — but you just click on the little icon here. And they have available some really nice options for this. So if you just want a straight image URL, or if you want to grab it as Markdown to put it in your GitHub readme, they have it as RDoc, RST, AsciiDoc, all these options. So whatever you're using in your builds, you can actually grab these and append this. So for GitHub, you just add it to the readme at the top, and you'll end up with something that actually shows you the current status. You can see here I've got it in my project. Build's currently passing for the master branch. And this is just a great way to notify your developers of what's going on, as well as just notifying yourself.

So again, the whole point in this is to make open source fun again, right? Managing projects should be fun. They always start off as this kind of great joy. You're putting this project out there in the wild. You're helping people with stuff. You're sharing your code, and really sharing a part of yourself. So by adding Travis, I'm really hoping that we can kind of make this process a little bit more streamlined, save some time, and make this more fun again.

If you guys have time on the way out, please leave some feedback. It's always helpful to us as speakers. I really appreciate that. And thank you guys so much for sticking around. I know it's the end of the day. It's the last session of the conference. You guys are the hardcore few. So thank you so much for coming. And I'll be around for Q&A if anybody has any questions. Do we have any questions?

I have a comment. I was digging through the help docs, and they will take C#, Visual Basic, and F# and try to build it on Mono, which means certain simple projects may work in .NET.

Okay. How many .NET developers in the audience? All right. So the one guy — two. Yeah. I'd say like 70% of the people at this conference were .NET developers. So that was actually really useful. I was reading up on that today, but I wasn't familiar with Mono. Is that a build process? Okay. So you can install that on Linux, then?

Yeah. It runs on Linux, but it doesn't support all the features of .NET, like some of the UI frameworks.

Gotcha. So for those of you on the recording, if you couldn't hear that — yeah, he's saying there's some information in the docs for Travis on using Mono to build your .NET projects. And again, from what I've been told, they actually are working on native support for .NET and Windows platforms. I just have no idea when it's going to come out. I was actually pinging the Travis team earlier today to try to find out, but they didn't get back to me quite in time. So hopefully we'll see that in the future.

That might change drastically for the better with .NET Core building on all the different platforms. So Microsoft embracing open source might get Travis there as well.

Yeah. So the point he was making is that the new ASP.NET Core and the .NET Core system are available to run on Linux, right? So the fact that that's open source should make the integrations easier for the Travis team, and also hopefully make it a little bit faster for them to release that. That's a great point. Questions in the back? All right, guys, well, thank you so much again. I can't tell you how much it means to me that you stuck around for this. So thank you very much. Cheers.
Continuous Integration (CI) can save you time, reduce the hassles of managing open source projects, improve the overall quality of your code, and make merging a joy. But it can be a bit difficult to understand at first and a chore to set up and manage. In this talk we’ll break down the concepts of Continuous Integration, and take a look at Travis CI, a free, hosted solution that makes it extremely easy to add continuous integration to your open source projects. Once we have a handle on the basics, we’ll move on to more advanced topics including sending build notifications, running unit tests, linting, and automatically deploying successful builds. While the example project used in this presentation will be written in Node, the concepts discussed, as well as Travis CI itself, can be applied to almost any language, bringing value to developers from all backgrounds. Come learn how to make managing your open source projects fun again, with continuous integration and Travis CI!
10.5446/51713 (DOI)
You know the problem with Hollywood is, they make shit. Unbelievable, unremarkable shit. Now I'm not some grungy wannabe filmmaker that's searching for existentialism through a haze of bong smoke or something. No, it's easy to pick apart bad acting, short-sighted directing, and the purely moronic stringing together of words that many of the studios term prose. No, I'm talking about the lack of realism. Realism — not a pervasive element in today's modern American cinematic vision.

Hello! Thank you, John Travolta, for the brilliant intro. That's from the movie Swordfish. Who's seen Swordfish? Yeah, we're gonna see some of that later on. That is the opening scene of the movie, and you just go, oh, it's gonna be realistic. He's kind of conscious and self-referential, and he's taken down the fourth wall to talk about how realistic it's gonna be, and then... no.

So yes, this talk is just a nice, easy, simple, fun thing for the last slot of the conference. Switch your brain off; it'll be much funnier that way. Try to suspend your critical faculties, and yeah, just enjoy it. I want to get one clip out of the way right at the start, because everybody always just goes, oh, have you got that clip? So yes, I have got this clip, so let's just get it out of the way right at the start. Everybody recognizes that operating system, yeah? It's a Unix system. It's a whole new world. Yeah, yeah.

When this came out, I was actually working with Unix systems in my day job. I was working all day, and we had Wyse terminals, and if you were lucky, you had a Wyse 60, which would do black and white, but if you were unlucky, you had a Wyse 50, which was black and green, and yeah, it was just like ls -l and tar and cpio and vi and all those things. That was not a Unix system. That was Virtua Jurassic Park, the video game. And also, that's a Silicon Graphics Iris Indigo workstation. They would set you back $20,000 at the time when this was made. So of course, she's been learning about those at school. Yeah, because the American education system is just that well-funded.

So yes. But the first movie I ever saw — the thing is, I got my first computer in 1982, when I was nine. It was a ZX81, and I got really excited about computers, and I got it to ask me my name, and I typed in my name, and then it said, "Hello, Mark," and then my mum did it, and she typed in her name, and it said, "Hello, bitch." Because there was a whole "if name equals Erica" in there. Not that I didn't like my mum!

But yeah, I got really excited about computers, and then movies about computers started to come out as well, and the first computer-based movie that I ever saw was this one, Tron, in 1982, and it was amazing. It was the first film that just relied entirely on computer-generated imagery. But there's this fantastic scene at the start, when Flynn's kind of talking to the Master Control Program, and then suddenly this happens. How young is Jeff Bridges? Oh my god, it's just... So yes, he's typing stuff. This is a 1982 artificial intelligence. This is amazing stuff. What? This is the thing, right? You're entering a big era. What is that piece of hardware? Why has he got that in his office? What is it actually supposed to do? I mean, I assume the computer has taken it over and is doing something unrealistic with it, but... Is it a printer? Is it like... it's an early iteration of the Philips Hue light bulb system?
I don't know what that is, that that's supposed to be, but yes. And then he goes through all of this, down into the computer, because this is what the inside of an operating system looks like. There's lots of lines flashing all over the place, and then you go down. When I was much, much older — I wasn't nine when I did this — I took an absolute shitload of magic mushrooms and then watched this on the biggest television I could find, and it's amazing. It's just completely brilliant. But yeah, he gets downloaded into the computer.

But actually, my favorite thing about Tron is: eight years later, when I got my first job and I started working on Unix systems, it turns out that tron is an actual command. It turns something on. I can't remember what it is — it's trace-something-on — and then there's a troff as well, which turns it back off again. So yes, from this, I learned that if you worked with computers, then you could go inside them and play lots of exciting games and get covered in glowy lights and all this sort of stuff, and that was brilliant. I definitely wanted to be a computer programmer when I found out that there were mad lasers that would suck you into the machine.

And the next film that I saw taught me the next thing that I learned from the movies, which is: all computer systems have a back door. Okay, this is a truism of computers in the movies. The people who make computer systems always leave a back door so they can get access. I mean, hands up here, who leaves a back door in every computer system? Fortunately for you, the camera is pointing this way, so the FBI doesn't know who you are. But have we got the audience mic'd for laughter this time? Because when people watch these videos, they go, why was nobody laughing? It's because we didn't give everybody in the room an individual mic so we could hear them. But anyway, so yes, trust me, people watching this on video: A, you suck, you should have been here. And B, they are laughing, I promise. No, really. Even when they're not.

So yes, but no, I found out that all systems have a back door from this, in 1983. Fantastic movie, and there's some great... If you haven't seen this, you should watch it just so you can see what computer technology was like. And if you work with people who don't know what a floppy disk is, by the way, then point them at this movie, because this is from the days of 8-inch floppy disks. The ones that were actually floppy. So yes, but no, there's this brilliant scene in this. He's trying to get into this computer system that he's found. And so he goes to see the experts on this stuff.

"I want to play those games." "You're not supposed to see any of that stuff. That system probably contains the new data encryption algorithm. You'll never get in there." "Hey, I don't believe that any system is totally secure." "Definitely not." "Yeah, I bet you could." "Well, you'll never get in through the front-line security, but you might look for a back door." "I can't believe it, Jim. That girl's standing over there listening and you're talking about our back doors." "Mr. Potato Head! Back doors are not secrets!" "No, everyone knows about back doors. They're not tricks." "What's a back door?" "Well, whenever I design a system, I always put in a simple password that only I know about. That way, whenever I want to get back in, I can bypass whatever security they've added on." Sometimes that simple password that only I know about, by the way, is da, da, da. "If you want to get in, find out as much as you can about the guy who designed the system."
I just think you could get root access to Facebook's MySQL database with da, da, da. "You guys are so dumb. I figured it out all by myself." "Oh, yeah, Malvin? How would you do it?" "The first game on the list. Go right through Falken's maze." Yeah, go right through Falken's maze — when he gets into the computer, it gives him this list. Do you want to play Tic-Tac-Toe or Chess or Monopoly or Global Thermonuclear War? So, yeah, obviously you're going to choose Global Thermonuclear War.

So, yes. But, no, all systems have a back door. And this was confirmed by this much later film, where even if you've created a virtual reality for people to kind of upload themselves into and everything, you're still going to leave a back door in there, just in case the sentient monster that you've created needs to get out. "Do you think that's one of the access, Greg? A back door." "A back door." So, yes. He got out just in time there, through that back door. That's very lucky for him. So, yes, every system has a back door.

I have also learned this next one. For some reason that is completely inexplicable, killer robots, right, still need a user interface in their vision system, and so forth. And I learned that from the original Terminator movie. It's a great movie. It's absolutely fantastic. And I'm sure that you can basically just go, well, no, the computer stuff is all accurate; it's just a question of how far in the future this crap happened. This is like post-singularity. Look, I've even got the t-shirt. And so — but you can see that something bad has happened in the release process for this Terminator. There is no sound on this one. Don't stress about it. Because he's got all this information just coming down here. It's coming down here. And you just think, why would he need that there? Surely that's just getting in the way. That's just distracting him.

And then I figured it out. Basically, when this one came off the production line, they put a debug build of the software on there. And it's still got all the Trace.Write lines and everything. And that's the only reason that Sarah Connor escaped, and Kyle Reese was able to save her, and the Terminator didn't change the future forever. It's because some idiot accidentally put a debug build into production and then sent it back in time. Otherwise, humanity would have been doomed. If it wasn't for that simple mistake in the continuous deployment system on the T-800 production line, we would have been screwed. But no, all these Trace.Write lines were still in there. That's fantastic.

So that's, yeah. And you can see this: any time you see a robot chasing people around, and you can see from that robot's point of view, there is always an absolute shedload of information on the screen that they're looking through. Because, yeah, for some reason — Robocop, I can understand. Okay, Robocop, there's still a brain in there. He still needs to be able to read the thing. Is this a machine? No.

So what else have I learned? I have learned that this is definitely true: deleting files off a computer. If you go onto a Unix system with its 3D interface, but manage to pull up a console and do rm -r /, then you'll be able to know that you've done something wrong, because your screen will just slowly melt down to the bottom and go blah, and start fritzing and all this sort of stuff. The best example I could find of this is from The Net, starring Sandra Bullock.
If you haven't seen this film, then just feel very pleased with yourself, and don't put yourself through it. Because she's got this disk that's got a virus on that just deletes all your files. It's like an early attempt at ransomware. And, yeah, so she's logging into a mainframe somewhere. And, of course, that key up in the corner there, that's the delete-all-files key. Everyone knows that escape means delete all the files. And then you can see that they use a Quantel Paintbox tessellation effect on this particular operating system to show that your files are being deleted. And very nicely, it kind of tessellates one, and then brings up the next file it's deleting, and then tessellates that, and so forth.

It's like when you watch these things where they're going, we're looking for a face that matches this face. Why would it display the face on the screen? You know, you've got this face here, and that one there, and you're just going, surely you're using GPU cycles to display the image that could be better used trying to work out whether it's the same person. But this is why hacking is so easy — because people don't actually understand what goes on.

And then you get a movie that comes along and just blows the lid off hacking and shows the real true story, what it's actually like to be a hacker. This was the first film that I think really just got to the core of what the hacking experience is actually like, let you feel like you were in the shoes of a proper, hardcore — not a script kiddie — hacker. This is just a perfect representation of what it's like to hack into a computer system. You go through the screen, and then for some reason city streets, and then you go down this thing here, and then you go through the lobby of the computer room that you're hacking into and down a hallway, and then over the feet, and then you get in there, and then this — I don't know what operating system this is, but I won't say it.

Is this what SteamVR is like? Is this what the Oculus Rift interface should be? If it's not, they are totally missing a trick. I want to be able to go and fdisk my computer through this interface. I want to just make sure that the backup's properly happened. Yeah, this is a totally accurate thing.

The other thing that I learned from Hackers, which you can almost see here, is that computer screens — particularly LCD computer screens in those days, because nowadays we all take it for granted that we've got a retina-resolution display, and it looks just like... I've got a Dell XPS 13, one of the new ones, and it's just like a piece of paper. The blacks are completely black. But back in 1995, they'd really just started with LCD screens, and they had a couple of problems. The brightness levels on them were just really badly adjusted, and you could see — I mean, look, in this scene here, they're looking at this computer. It's projecting onto his face. It's projecting the screen. And it's great, because they're trying to show off by how much they know about computers. Indeed. RISC architecture is going to change everything. RISC architecture is going to change everything. It's just random words that they throw in. This is a word I read in Computer Weekly. You say those words, and it'll sound like you know what you're talking about.

But yes, you shouldn't be a hacker, though. Don't do hacking. Hacking is bad, because if you do do hacking, then the police will catch you, because the police are just really hot on this sort of thing.
They have the best people in the world working against you if you try to hack. And even just the regular police — the crime scene investigation guys, who are just like... Where is the original Crime Scene Investigation? New York, or wherever it is? Las Vegas, isn't it? So yes, the Las Vegas police, they have the best guys in the world. They will just catch a hacker without any trouble at all. They've just got these mad skills. "For weeks I've been investigating the canvular murders with a certain morbid fascination." "This isn't real time. I'll create a GUI interface using Visual Basic. See if I can track an IP address."

That's totally what I would do. It's just... The thing is, because we do what we do, we know that that's stupid. But it always makes me wonder when I'm watching, like, medical dramas, and somebody says, "I need 500cc of toilet cleaner, stat." Or would we even be able to notice? Because they'd be going, "I need to bandage his heart to a drainpipe."

So yes — sometimes, though, just one expert isn't enough. If a hacker's really good — sometimes these hackers are proper, just elite hackers — sometimes they're so good that you need more than one of the police experts to be able to keep up with them. And this is from NCIS, the Naval Criminal Investigative Service. I watch this every week because my wife loves it. And two of the people on their team are brilliant with computers and science and maths and all this sort of stuff. But sometimes even they are forced to work together. "No way. I'm getting hacked." Of course, Steve. Yeah, so you can tell she's getting hacked. But yeah, so it's too much, even for Abby Sciuto to be able to keep up with this. And so Tim McGee has to join in. "This is the third user connection, the interest database, Severus." It's kind of like — and they just, they've pre-organized this, just in case this ever came up. It's going, right: you type the words with Q, W, E, R, T, A, S, D, F, G, Z, X, C and V, and I'll take care of the other letters, and I'll hit enter. I've never seen code like this. Just completely brilliant.

So yeah, and then Independence Day popped up. Independence Day had some very interesting information for us about the nature of aliens. When we do make contact with — I mean, I honestly believe that there are millions of alien civilizations out there, and some of them are bound to turn up and try and steal all our water or something eventually. It is just a matter of time. So it's good to know that when they do turn up, their spaceships are going to come with AppleTalk networks, so that we are able to access their internal systems. Because if they were using some kind of alien technology, we'd have an enormous problem. But no, we're not going to have any problem at all. We're just going to be able to steal one of their spaceships, fly up in there with a Mac PowerBook and the Fresh Prince of Bel-Air. And there we go: negotiating with host... negotiating with alien computer. Oh no, it got online. It's online with the vastly superior technology. And yeah, there we go. We can upload the virus, because we know what architecture they're using. It's probably a RISC architecture, because RISC architecture is going to change everything. And you sort of think, well, even if you could connect to the alien computer, what are you going to be able to do that's going to take this alien computer out?
And of course, what you're going to be able to do, as everyone who's been on the Internet will know, is display some animated GIFs to piss them off. And that's going to work as well, because the GIF format, it turns out, wasn't invented by anybody. It was just waiting there to be discovered. And part of the evolution of every intelligent life form in the universe is to discover the animated GIF, which means that Jeff Goldblum is able to do this. There we go, he's uploading the Jolly Roger. And now we're waiting on the alien dude, and he's just going, that's never going to work. It's not going to be able to recognize that file format. How could it possibly ever do that? And he's going to be going, that animated GIF's taking a long time to load up; it's probably a chrome-tackle. Look, no, there it is. Yes, yes. The aliens' LCD screens, which are running over a VGA connection, have decoded it. And it's displaying on every screen in the entire spaceship. "You guys have about 30 seconds." "I hear the fat lady." "You're obsessed with the fat lady." Yes, stop talking about the fat lady. You're violating the code of conduct.

Antitrust. I remember when Antitrust came out, there were lots of reviews that actually said, you know what, this one is really quite accurate. This one's obviously done its research. And they've spent a lot of time talking to programmers at Microsoft about this, because it was all based off Microsoft trying to make everybody use Internet Explorer 6, and so forth. And so they did actually interview a lot of people at Microsoft and say, okay, so we want some kind of technical stuff. And I get the feeling that every single person they talked to at Microsoft had already agreed in advance to take the piss out of them and just go, oh, well, so, yeah, computers — we have our computers upside down for security, whatever. Just think of the stupidest shit you can say and tell Hollywood, because they will believe you.

One of the things that Antitrust taught me is that really good programmers can feel the code. Okay? If you are a good enough programmer, you can just look at a screen full of code and just go, oh, that's nice. Oh, that's the... Oh, I'm actually going to have to go off to the men's room and think about that code for a bit. Like in this scene here, where evil Bill Gates dude is going to see young, innocent, naive other dude. "In the middle of something?" Ryan Phillippe. "No, I'll check this out." Yeah, there you go. Now, of course, you do carry code around on CDs. You don't use your corporate network to transfer this across. "So did this, Josh, or Lenkhead? Somebody, yeah." "Yeah. Did you use it?" "Yeah, the compression is awesome." The compression is awesome — from one screen. Actually, look, there you go. It's... awesome compression. Oh, man, the stuff this guy is doing with a byte array, it's unparalleled. I've never seen anything like this before. It makes me want to stop being a programmer, because I just bow down before this guy's ability to index bytes in an array. It's fantastic. I love the idea that the Microsoft guys were just going, let's just tell him stupid shit.

But — and also, Tim Robbins' character, he got that code not by installing keyloggers on other people's computers, or just by hooking into their system and copying the files off. No, he got this code by putting cameras in all their offices — like little hidden cameras — and then watching over their shoulders as they were typing it, and he types it in off the camera.
You sort of think there's got to, you know, be a better way, but clearly he's not that good with computers.

So, yes. So back to John Travolta from the start of the talk. This was this movie, Swordfish. And Swordfish is every possible kind of offensive. The tech stuff is offensive. The treatment of women within the movie is offensive. It's just so very, very offensive. This is sort of where Hugh Jackman — sorry, Wolverine — has been hired by Vincent Vega to break into a thing. And Catwoman's there for some reason. This is a Marvel–DC crossover: Wolverine and Catwoman in the same thing. She's making them drinks. "Uh-uh. How did you do it?" "I don't know exactly. I just see the code in my head. I can't explain." Sees the code in his head, yes. He did not drop a logic bomb through a trapdoor, because that's not a thing. And so, yeah, he won't tell him how he did it, because somehow he hacked into the FBI's most encrypted system in under 60 seconds, whilst something was happening that would probably have been distracting. And he was drinking a margarita at the same time, I believe.

So John Travolta goes, this is the guy. This is the guy we need to do the thing that we need to do. And so he gives him this computer, and you can tell how powerful this computer is, because, as everybody knows, computer power is measured in the number of screens the computer has. And so just imagine how powerful this computer is. So powerful! Look, I think even though he's just recovered from the thing that was distracting, I think he may be up for it again just from looking at those screens. It's very difficult not to violate the code of conduct when I'm doing this talk. It really, really is.

But Wolverine, though — they haven't just hired him because he's the best hacker in the whole world ever and can feel the code in his head. They have hired him because he is the coolest programmer ever. I don't know about you, but when I'm working on my side projects in the evenings, in my study at home, my computer is not that powerful. It's only got two screens. But it's the best I can manage. My wife won't let me have another eight screens, because we have a small house. But this is basically exactly what I look like when I'm working on my open source stuff in the evenings. This is how I write code. Yeah, see, I smoke my... And you have to have a cube. Please. You have to have a cube. These stages move around quite a lot. But no, you have to go — I type exclusively in numbers on the top row of the keyboard, and I just spin myself around. And I know when I've got my open source code working properly, because it turns into a Rubik's Cube with all the colors the same on the same side and everything.

But, you know, sometimes it gets away from me, and all the little cubes fall off. Oh, no, the cubes are falling off the bigger cube. Dammit! I'm going to just bang my head on stuff and go and open another bottle of wine and have another cigarette there. That'll help. You just need to take a break and relax for a minute and smoke my cigarette vertically. Big glass of alcohol. Oh, no, hang on. Hang on. If I creep up on the computer, like, very slowly, so it doesn't see me coming, then maybe... Yes, yes. Oh, yeah, no, I'm back in the groove. The cubes. The cubes are going on. Yes, they are. And there we go. The cubes have gone back on, and there's the dancing. I'm standing up to type. You know, when you're doing that last bit before you do the commit message — you know, it's red, green sitting down. But refactor, you have to stand up for that.
Look, look, the cubes. The cubes are all on there. They've got their cubes on. We can do a bit of dancing. Oh, yeah. Oh, yeah. Look, he got the last cube on. That's a worm, apparently. I just — you know, if I was John Travolta and I came in and he went, look, I've got all the cubes on, I'd go, what the fuck is that? I'm paying you to hack into the FBI database. You're playing 3D Tetris. So, yes, I'm probably going to get banned for doing that now.

But there are some common themes that you get throughout all of the movie computer things. One of the common themes that you may have noticed is that out there in the world — this is in the world where every major metropolitan area in the United States only has 10,000 phone numbers, because they all start with 555, you see. And actually, they've got that for IP addresses now. All the IP addresses start with 172 or 10 or 168. But, yes, for some reason, in Hollywood's version of the world, there is no such thing as Windows. Nobody uses Microsoft Windows in the Hollywood world. Everybody uses, I assume, some variant of Linux with a skin or a theme that you can't download off the Internet.

A recent movie — and actually, to be fair to this movie, well, two things. One, it's got Thor in it, which is completely brilliant. And two, quite a lot of the hacking stuff — they obviously did some research, and they obviously talked to some people who weren't just taking the piss out of them. So there is some quite realistic stuff in there. There's some interesting uses of the Bash shell as an Internet chat program. But still, there is some good stuff in there. But nobody has Microsoft Windows, which is quite handy, because Microsoft Windows is useless for hacking with. Everyone knows that. This is a Korean restaurant or a Chinese restaurant somewhere, just in the depths of China. And, of course, this restaurant is using, I believe, Kylin, the Chinese version of Ubuntu. But, yes, absolutely, no, it doesn't matter. You will never, ever, ever see a Microsoft Windows system in a Hollywood film, unless Microsoft have paid an enormous amount of money for them to use Windows phones, so that you can just get the view over the top there of the little tiled view, so that they can try and crack that 0.5% market share.

Yeah, there we go. I tried pinging that address, and it turns out that 284 is not a valid part of an IP address. This is the other way to make sure that you can't do it: just go, well, just put something more than 255 in there, and that will not work. But, yes, this is... So, I don't know what... There is actually an app, or a command line application, on Linux called write, but this is not how it works. You don't get your bash prompt while you're running it. It's possible this clip goes on too long. But, no, actually Blackhat is a pretty good movie, and I didn't mind sitting through it three times writing notes for when the useful scenes were.

But, no, the other theme that has been true ever since 1982 and Tron — and actually films like Proteus and various other things from the Dark Ages — is that the computers eventually are going to kill us all. This is actually completely true. You just have to ask Elon Musk; he knows. Whereas in 1969, computers were definitely, definitely going to kill us.
They just — even if it was because we'd just programmed them wrong, and given them mission parameters that weren't logically internally consistent, and they had to shoot people out of airlocks. That wasn't particularly HAL's fault. It was very sad. Daisy, Daisy.

Skynet, on the other hand — total psychopath — just worked out that the best way to protect humanity from itself was to fire all the nuclear weapons and get rid of them. Slight side effect: everybody died, except John Connor and a couple of his friends. But, yes, Skynet, that's going to try and kill us. Or, alternatively, you can have computers that we have a war with, and then they decide that the best way to celebrate their victory over us is to combine our internal electrical supply with a form of nuclear fusion, and then use that to power themselves, because we scorched the sky. And you're going, if you've got nuclear fusion, you probably don't need the human batteries and the simulated reality.

Hands up, who believes we're living in a simulated reality, by the way? Yeah, no, it's billions to one that we're not. Somewhere, there is a complete simulation of an entire universe. And so, therefore, somewhere in that complete simulation of the entire universe, there is a complete simulation of the entire universe, and so on and so on and so on until infinity. And so, therefore, the chances that we're actually on the outside in the real one: very, very slim indeed.

I would like to call out some honorable mentions at this point. I don't want to give the impression that there has never been a Hollywood movie that has actually done a good job. Sometimes somebody does do the research; they base the idea, the concept of their film, on a real, kind of, proper thing. And then they inexplicably put Bruce Willis in it and call it Die Hard 4. But Die Hard 4 — so Die Hard 4 is based on the concept of hostile hackers holding a fire sale. This is a fire sale. So they're broadcasting this on every screen in the continental United States of America. "The great, not-fine, role of the America. Progress has come to an end. All the vital technology that this nation holds dear, all communications." The great thing is that all the presidents of the United States of America had to get a credit. So they're on IMDb as themselves in this movie, including Richard Nixon. "We will not tire, we will not falter, and we will not fail." What am I getting in there? Thank you. "Happy Independence Day to everyone." That was creepy. I tried to find more Nixon.

But yes, that whole concept of that movie is based on a technical white paper that was written for Wired magazine by John Carlin, called A Farewell to Arms. And you can still — you can go to Wired's archives and download that. And he describes this idea that, because the whole of the world's infrastructure now is controlled by computers, if you can gain control of those computers, you can shut everything down. So yes, it's just weird that the way that they prevented that from happening was getting John McClane to take his shirt off for a fourth time and go and get very dirty and covered in blood and take his shoes off. And I think that's the one where he ends up shooting himself through the shoulder to get the person behind him to stop trying to kill him. But no — so, honorable mention, Die Hard 4.

Another honorable mention, for a movie that doesn't actually exist. Everybody knows there is only one Matrix movie, and they never made any sequels. They would have been great if they had made sequels, but they didn't. They never happened.
The Matrix sequels never happened. That thing after choir practice never happened. Windows Vista never happened. There are just some things that are too horrible. But if The Matrix Reloaded had happened, it would maybe possibly have had a scene where one of the characters actually uses Nmap to do a port scan of a system that she's trying to gain access to. So yes, this is actually what this would look like. Not so much him with his green things where apparently he can see a blonde woman in a red dress. But yeah, she is hitting enter and not escape, so clearly she knows what she's doing. I never quite understand why, when the electricity goes out, all the lights go off a block at a time instead of all in one go. Never figured that one out. So yes, honorable mention though there, because that was actually what you would use to do the thing that she was doing, in the simulated reality that's been used to enslave humanity and treat them as batteries. However, I would like to put in a really, really dishonorable mention right at the end. This is something that I didn't manage to get a clip of for the last time I did this at NDC London at the start of this year, but a really, really dishonorable mention. So CSI, where we earlier saw the clip of the woman who was going to use Visual Basic to create a GUI to track down the hacker's IP address. They decided that they'd done such a bang-up job of talking about cyber warfare and hacking and everything else that that deserved its whole entire spin-off where that was all that happened. Okay. And so you think, well, so therefore if you're going to actually create a show called CSI: Cyber, which is about hacking and computer crime and the fight against computer crime, surely, surely now you're going to do your research. Yeah. You think you're going to get this right. You're going to get a consultant. I will turn up. I will not charge very much money. Okay. I will charge you $10,000 an episode and I will turn up and I will help you get this right. But you can probably find someone much smarter than me, who knows much more about this sort of stuff, who can turn up and help you with that. And that would be why, because if you don't have that consultant, you end up with a scene like this. I got his green code here. Oh, no, it's all right. Yeah, no, there's nothing wrong here. All this code is green. You struck out here too. Oh, no. No, some of it's gone really bad. And then that happens. It's like everything from the whole of the last 40 minutes just condensed into one 18-second clip from quite possibly the worst television series that I have ever seen in my entire life. You can watch their 45-minute episodes, 24 episodes a season. They're into their second season, it's been renewed for a third, and it is literally just end-to-end stupid. It is a fantastic way to just go: maybe I am epic, maybe I am a clever person who actually knows a lot about computers, because this is what happens when you don't. So. This is a very intensive talk to prepare, and that means it's now finished. Thank you very much for coming. I hope you've enjoyed that. I hope you've enjoyed the conference, by the way. If you were hoping for that Weird Science movie to make an appearance, there is not any part of that movie that I could show in a public forum without violating the code of conduct and just your general sense of human decency. So I apologize, but do feel free to stream it on YouTube. The whole thing is on there. Thank you very much.
I hope you've enjoyed it. I hope you enjoyed the conference. I hope to see you next year. Thank you.
Hooray for Hollywood! Nothing has done more to educate the public about technology and computers than the silver screen. Come on a ride down the boulevard of dreams, where you can learn how hackers hack; why artificial intelligence wants to kill us all; and what happens when a self-replicating trojan virus worm breaks through a 256-bit firewall. I promise to start on time this time.
10.5446/51719 (DOI)
I think it should be working. So hello and welcome to my session about a language called Go. I'm really happy to see you. I think, well, I hope the coffee is good. I've had just one so far. Who here is enjoying NDC for their first time? Me as well. It's my first time at NDC in Oslo. It's my second time in Oslo because I have a friend living here. So it's amazing to be here. It's a lovely city. Well, the weather is very generous to me as well, again, which is also amazing. And this talk is about a funny language called Go. Funny as in, it brings fun. It's not something to be joked about, because you can do very serious things with it. If you have any questions, of course, well, you know, try to raise your hands. I'll try to notice you and I'll try to repeat the question that you have and maybe even answer. So starting with a bit about myself. Well, you could describe me as a Java developer in a .NET, in a primarily .NET conference, talking about a totally non-.NET and non-JVM language, a bit out of place. But, well, to give myself a bit of a background, I come from Kraków in Poland. I now live in London. I run a conference called GeeCON. I have also been involved in the community, as in running the Polish Java user group, and I founded the Kraków Hadoop user group, which is now called Data Kraków because, well, data science is much more popular. Apart from, well, being interested in all the usual software engineering things that you would expect, one thing that needs to be mentioned is Software Craftsmanship Kraków. It's a group which you would expect to be about testing and software craftsmanship, and we founded that in 2010. It's actually a group in which we read computer science papers. So every two weeks, we pick a paper. Well, people read it before, and then we come in, meet and discuss it for, give or take, two hours. So it's a very, well, different thing to what people usually connect with software craftsmanship, but I think it's worth noting because we've had over 100 meetups so far. Anyway, me, trying to describe myself in a single sentence: a developer going deeper. So if something works, I'm usually suspicious. Why is it working? This shouldn't be like that. It shouldn't compile. Which means I like to go deeper and deeper and deeper and deeper. Well, with all of the time that it requires. One disclaimer: my opinions are my own. You can find on LinkedIn who I work for, but this talk is not connected to my employer in any way, of course, because they are very brand protective, which means they don't want me to mention their name. That's their choice. If you have any questions, shoot, or try to find me after the talk. If you have any questions during the talk and you would like to raise your hand, I would love to answer. So, Golang. It's a free and open source language. Free as in free beer: you can, well, download it from the internet and use it, and, as of now, at least I haven't heard otherwise, you don't have to worry about a large database company suing you for using that. And it's open source, so, again, you don't have to worry. I haven't seen a legal department that would give people problems because of their attempts to use Go. It's BSD-licensed, so that should be okay in most places. Of course, there might be some extremes. I'm not a lawyer. It comes from Google, so it has been invented there. Google uses it. At least that's what they say, because I've never worked for Google, so I have no idea what it looks like internally.
But it comes from there to solve a couple of problems. Who here has written some C or C++ in their lives? Yeah. There are some, let's call them inefficiencies or problems or landmines or, well, 10,000 ways to shoot yourself in the foot very effectively, blowing the whole leg and half your hip off, in those languages. Go is nice, as in it allows you to save yourself from some of the problems. It takes some of the power away, and at first I thought that it's very limiting. That was my first impression. It feels like I can't do all the things that I used to be able to. Am I really going to be able to be productive in this language? Then it turned out, I'm not really missing all those features that much, because I can still get the job done, and usually there are fewer bugs and there's less code. Go is fast, as is the whole toolchain. One of the key goals in creating the language has been speed. Speed in compilation. You might have seen compilation processes for large C code bases that can take a day or at least a couple of hours. With Go, I haven't seen really long compile times. Go compiles really, really fast. The whole tool chain is very fast, which means you can have very rapid development cycles, which means you get feedback often. You can run your tests. You don't have to wait a day for a software deployment manager and their team to attempt to configure the thing and run CMake, dmake, your flavor of make, and wait for it to happen. Go has a mascot, as every single, well, self-respecting open source project does. It needs some graphic designers to help it. So for Go, the Gopher is the mascot. And in case you're already asleep, thinking about lunch or some of the awesome stuff that you can see in the exhibition hall (I really loved the VR experience there), here is the one-sentence summary: it's a mix between C++ and Ruby, because of the syntax, because of the power, and because of how low level Go gets. So if you get to play with it tonight, I really encourage you to. It's super easy to start with. You just need to download the tool chain and then you can start. You don't need to download a specific operating system, a specific IDE, a specific something. Just the Go distribution for your machine of choice and a text editor, and you're good to go. So, well, I always get questions about who uses Go. What companies use Go? Can I use some big brands to encourage my place to use it? Well, Google obviously, as the inventors of the language. Well, eBay, which for some of you might be a good thing, for some of you a bad one. Dropbox. Well, Bitbucket now, Atlassian, I think. GOV.UK. So that's a government entity. They use Go the language, and the great thing about UK gov is that some of their Go usage is open source. And I can only say some because I don't know of all of their Go usage, but GOV.UK open sources a lot of their currently written software, which means you can actually see that this is good enough for the UK government. This is the link to GitHub. Have a look. It's serious stuff running on Go, used by a government. That's a very good political argument for some of the discussions. Docker, of course, and some other things. The Go wiki has a much longer list of users. There is a lot of companies in London which use Go and experiment with Go. I can give you a short story about a company that just happens to be in my area, called Not on the High Street.
I went there, I think a year ago, to do a talk about Go the language because some of the developers were curious, and just a week ago I learned that they had already started using it in production. That was nice. To tell that story completely, how did I learn about Go? Well, I heard that Google has open sourced a language, created this language, and I was just curious to start. I couldn't find a reason. And then we were doing effectively a router, an HTTP router, in Java. Which means, if you're doing any software in Java, it tends to grow into thousands of lines. So our software had, I think, well, 10,000 lines plus a couple of hundred XML lines plus, of course, you had to use Tomcat and other things. And at one point I thought maybe we should take a step back. Maybe something is a bit wrong in here, because the amount of dependencies, the amount of things or crap that you need to install to get it to essentially route HTTP requests and serve very simple things, it feels a bit too high. So we rewrote that into Go, and we ended up with around one-eighth of the code. And we got a single binary, and that was it. And that felt really awkward because, well, at least me coming from the Java ecosystem, I'm used to bloated software and sometimes to the verbosity. And then in this language, well, this is everything. So we've shown it to other people: well, okay, show me the code. That's readable. That looks like Ruby. I don't really understand all of the tiny details. But those are details. Where is the rest of it? No, no, no. That's all. That is impossible. Because, as I said, it was around one-eighth of the code, and that was our first, well, serious project that we did for eBay in 2013 in the Go language. And then we started having Go as a regular piece in our tool chain. I do not encourage you to rewrite all your software into Go. That's never the solution. It's a great language for some usages. Well, it's not perfect for everything, because I still believe there is no silver bullet, which obviously translates into: I'm an engineer, not a consultant, because a consultant would try to sell you one. Where do I use it? All kinds of network-related middleware. Yes. All kinds of anything that needs to deal with sockets, well, HTTP traffic: analyze it, route it, do something with it. Go is super pleasant to use for that. I know that there is a lot of companies that use Go for microservices. That wouldn't have been my first choice, because I personally prefer languages with, well, a richer type system, because then you can reuse the domain language that you create in one application in the other, which means I would probably tend to favor Scala right now. But there is a lot of Go usage in the microservices space, and people are very happy with that, because the applications in Go, you can create them quickly and you can run them quickly, and they usually behave quite well, which means a lot of the overhead that I mentioned you don't have to go through. And I mentioned Scala because I like Scala the language, but at first it hurts, because the learning curve is quite steep. Go has a much, much nicer, much more approachable learning curve. Just to answer the question that usually happens: where not? Well, user interfaces, web applications, well, you can't write them in Go. I haven't seen a Go to JavaScript transpiler just yet. It doesn't mean that there doesn't exist one. It's only that I haven't seen it. It's a back end development language. It's really good there. But I've been talking for a couple of minutes now, and I think it's time for, well, we have to go through some code. So, mandatory hello world, because of course we have to. Let's make it even larger. Yay. So, you have, well, a package declaration. That's obvious what it does. We import fmt. So, we import a namespace of another package that we will be using a function from. And then we have, well, we use a function from that. You can notice that it starts with an uppercase. That's a convention in Go. Anything that you want to export within a package has to start with an uppercase. Let's try to run it. So, go build main. That's done. And go run main. Oh, sorry. Yay. That was fast.
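For reference, here is roughly what the hello world being described looks like; a minimal sketch, with the greeting text being my own invention:

```go
package main

import "fmt" // the package whose function we borrow

func main() {
	// Println is exported from fmt, hence the uppercase P.
	fmt.Println("Hello, NDC Oslo!")
}
```

You would build and run it exactly as in the demo, with go build and go run.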
So, why is main not uppercase? Because main, that's a really good question, that's a catch. Main is this one special thing that you start your applications with. If you notice any similarity with the usual main function that's present in many other languages, your intuition guides you well. Main is this one special package in your source tree that is supposed to contain the entry point for your application. So, if we look at the directory structure here, well, there is bin, there is pkg, there is src. Under src, well, you can have different packages. And if you have any dependencies, they will end up here, but we'll discuss, I'll talk about it later. If you want to create an application, the default would be that it would sit in a package called main, and all of the other stuff should be your dependencies. So, if you want to have a number of applications, well, you will have a number of projects with different main directories, and all of the other things will be your dependencies, which encourages you to externalize and then think about how you structure your software. So, just a convention. So, we've seen a hello world. Why do I like Go? Coming back to that point: well, no runtime dependencies. What do I mean by that? I mean that I get, oops, I think I want to be here. So, this main, this binary, has everything that's needed. It only prints a hello world, and somebody could say that it's a bit large, that it takes two megabytes to just print nothing, well, almost nothing. Just a hello, and you see this very important message, but then it takes two megabytes. The answer is, Go links everything statically. So, you can take the binary, you can deploy it to all of your servers, and you don't have to worry about missing dependencies. Everything is contained there, which means your release management becomes trivial, because you just need to, well, take a binary and deploy it to a number of machines. Your dependency, your library management becomes super easy, super pleasant. Your operations people will most probably be very satisfied with such an approach, because that gives you simplicity, and simplicity is what I really like in software. So, deployment, as we said, super easy, much more pleasant than C, because we don't have to produce a lot of the artifacts that we would sometimes have to worry about, or cache for the compiler, to optimize our enterprise compilation process. And you've seen that I've compiled this tiny amount of code using a go command, and I mentioned something called the Go tool chain. And now let's talk about the tool chain. Let's give it a bit of a look. So, if I type go, I will see that go is a tool for managing Go source code. Yay.
The commands are, so the built-in commands, the things that are built into the basic tool chain, the thing that you download from the Internet. So for anybody using the Go language, anybody being able to compile, you can safely assume they can build. They can show documentation for a package or symbols. They can format, and run a formatter. You can also generate. You can install dependencies. You can download dependencies. You can list packages. You can run. You can test. You can print your current version, because obviously you'll need to do that. Being able to get and manage dependencies, being able to run your tests out of the box, that's usually a pretty good start. No other things are necessary. That's all built in. To be fair, what do Go tests look like? Well, tests are just simple programs that are supposed to have an exit code of zero, like any decent piece of software. Well, that's a convention that they follow. But then they already give you something that you can test your source, your code, with. You can build and you can manage dependencies, which means you can specify dependencies that sit, for example, on GitHub, and this tool can download them. And we'll see that a bit later. So the go command. We've seen that in action. And I mentioned go format. The authors of Go decided that there are certain discussions in the software industry not really worth having in most contexts. One of them is formatting. So in terms of Go, tabs versus spaces doesn't exist. I know it's a great flame war to have over a beer, but with the Go language you are supposed to be using tabs, and all newline characters will be converted into semicolons, so you don't have to use them. So the way you place your returns actually matters. That also means that every single piece of Go software should look the same. It's encouraged, it's highly encouraged, to run go format after every save in your editor, and it's a very easy thing to configure. That also means that analyzing software becomes a much easier job, because you can make certain assumptions about what the source code looks like. How is it shaped? Because everybody's source is shaped the same. And that also means that when you're reading somebody else's Go, you don't have to wonder why this curly brace is sitting on a new line and there is a comment in between. No, no, no. There is one way to do that, and everybody follows that. This is more or less what it is. I'm saying more or less because how Keynote decided to format my text is a different story.
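Coming back to the testing convention mentioned a moment ago, here is a hedged sketch of what a Go test file looks like; the adder package, the Add function and the file name are all invented for illustration:

```go
// adder_test.go, picked up automatically by `go test`
package adder

import "testing"

// Add lives here only to keep the sketch self-contained;
// normally it would sit in adder.go next to this file.
func Add(a, b int) int { return a + b }

func TestAdd(t *testing.T) {
	if got := Add(2, 3); got != 5 {
		t.Errorf("Add(2, 3) = %d, want 5", got) // failures mean a non-zero exit code
	}
}
```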
But another thing, another great thing that I like about Go, is that it's very explicit about the types it uses. This is not supposed to give you an introduction on how to start with Go with absolutely everything, because I assume you are very intelligent people. You can read, you can follow the tutorials online on your own. I wanted to tell you why I find Go a very good and very pleasant language, and what features actually make me still like coding in it three years after I did the first proper commercial project in it. So the types have their width, their size, in the name. So you don't have to worry or wonder how many bytes an int takes. No, that's in the name. Is it signed, is it unsigned? That's in the name. What is it? No, it's all built in. For those of you who still program in C that might not be a problem, but for people coming, for example, from Java or some other languages, that's a problem. If there is an architecture change happening in between, or you're programming for a different set of devices, well, an int32 is always an int32. This is what it is. This is how wide it is. You don't have to worry whether the default for that platform is 16, 32, 64 or something else. That problem is solved. Which means your interfaces, especially for something that deals with bytes and some really low level funny bits, are very explicit, are easy to follow and are easy to understand. A bit of syntax. Well, you can declare a variable and you can assign to it, or you can declare and assign at the same time, and you don't have to specify types sometimes, because the Go compiler can infer them. Well, another poke at Java, which still can't figure out what the type of things is, even though it knows very well. If you want to print the length of a string, you call a function len with the argument of the string that you want the length of. Which means string, unlike in some other languages, is not an object here. So it won't, or a class, it won't have methods like you would expect it to. Which is just the way it was decided. It has some other consequences, but we'll see them in a moment. And now let's have a look at maps, because maps here have quite a nice feature, which we'll see in a moment. So I declare a map, which is a map from string to bool, nothing fancy. I initialize it with two values. And then, in this very line, I try to access a key that's not set in there. And instead of getting a null like some languages would give you, you get false. And false is the zero value for boolean. So there is a concept of zero values or default values, which is: what is the initial value of a variable if you didn't bother, or you didn't declare it? That means that for all variables, unless declared otherwise, you know what to expect. For ints that would be zero, for floats that would be zero, for strings that would be an empty string, for pointers to larger structs that would be nil. But it also gives you this tiny little perk that if you want to access a key inside the map, you don't have to check, like so many times we have, whether it exists in the map, and then if it doesn't, insert it and then set it to a value. No, you can assume it exists and then operate on it as if it did. Which means if you want to have a map with counters, you can just say map of the key and just increment it. Because Go will make sure that if it wasn't there and you access it, that's a zero; you don't need to store that. But if you actually change it, sure, it will be there, stored as you would expect. Which saves you a lot of lines across many, many projects. Of course, I know it's a tiny detail, but it's really helpful. And then structs. Go uses structs; it doesn't have classes like Java, like Scala, like probably C# classes. They decided to go with structs, as in constructs that are just a list of fields. And you can initialize them this way. So just pass the arguments, or you can use named arguments, and then the order doesn't matter. For a lot of reasons, having named arguments is really helpful, because it just helps readability, especially in the public sections of your APIs.
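To make those pieces concrete, a small sketch combining sized integers, inference, len, the map zero-value trick and a struct literal with named fields; all the names here are invented:

```go
package main

import "fmt"

func main() {
	var a int32 = 12 // the width is part of the type name
	b := "hello"     // type inferred by the compiler
	fmt.Println(len(b), a)

	counts := map[string]int{}
	counts["go"]++ // a missing key reads as the zero value, so no existence check
	fmt.Println(counts["go"], counts["rust"]) // prints: 1 0

	type person struct {
		Name string
		Age  int
	}
	p := person{Age: 30, Name: "Ada"} // named fields, order doesn't matter
	fmt.Println(p)
}
```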
And then you can call methods. Well, you can cast a variable to an interface and then you can call methods, but let's see interfaces. So we have a type that is called square that has just one variable inside, called side. And we declared that there is a function that operates on squares that's called area. And it will return an int32. Which, well, the area for that, it's quite obvious how it works. So we define a square and we can call area on the square. And now the question would become: why? How is it that it works that way? Well. So the authors of Go decided that knowing how other people are going to use your code, and having them explicitly say import or extends or everything, is a bit of a nuisance. And maybe not a nuisance, but it's tiring and it brings complexity into the world. And you, by definition, cannot foresee all of the possible usages of your type. So instead of restricting and formalizing, and maybe putting your type hierarchy into concrete, you get structural typing. So if your class has methods that match a type that somebody else declared, it means your class fulfills that other type. Which means if there is, I don't know, a Martian square that somebody has designed that just has the same methods, I can cast, well, me or they can cast my square into their square, because the methods match. But that's it. That seems really awkward at first, because how is it going to work? But then it turns out to be an extremely powerful feature of Go the language, because it means that you get to use IO, you get to use a lot of the libraries, in ways the authors couldn't have foreseen they would be used for; just because the types match, you can cast, and then you can use them in your program. And that's a really, really, really nice feature. So where is it useful? As I said, IO, yes. All of your domain class hierarchies. If your domain class has one function, another function, another function, it means that set of functions forms an interface. It means you can use that type as something specific. Which means if your domain class hierarchies are getting larger and larger and somebody wants to use them in a very peculiar or specific way, they usually can. They do have to cast, but they don't have to introduce special marker interfaces into the whole type hierarchy. Which for languages like Java, Scala, anything else, forces ugly, less and less specific, more and more generic hierarchies. So if you ever looked at what a particular array list or a particular hash map looks like, what types those extend... In Go you don't have to worry about that, because it's only about the methods that you declare. If the signature fits, and if the list of methods fits another interface, you can use one type as the other, and that's it. That's helpful.
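A sketch of the square example as I understand it; the Shaper interface name is mine, and note that Square never declares that it implements anything:

```go
package main

import "fmt"

// Shaper is satisfied by anything with a matching method set.
type Shaper interface {
	Area() int32
}

type Square struct {
	Side int32
}

// Having Area with this exact signature is all it takes.
func (s Square) Area() int32 {
	return s.Side * s.Side
}

func main() {
	var sh Shaper = Square{Side: 4} // no "implements" keyword anywhere
	fmt.Println(sh.Area())          // prints: 16
}
```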
And now let's look at some network programming, because the person speaking here previously has talked about networks and using the network quite a lot. So let's have a look at an echo server. Well, this is how much code you need for an echo server. Of course it's a sample, so it will be short by definition. But this is it. You expose a listen address, you create a TCP listener, you accept connections, and then, well, this is just a server that will copy whatever you sent to it and reply with the same. Quite silly, but just to show you how much code is needed. It also shows you another feature of the Go language. You can see that when I try to listen, I get a listener and I get a second thing, that's err. So the err thing is the error. If error is nil, that means that nothing bad happened, but if error wasn't nil, that means that something wrong happened and, well, I can examine it. It's not nil; it will be the object that describes the unexpected situation. There is no exception. It's not that the flow of your program will be seriously altered, something will explode here, and then you have an exception a couple of stack traces below or to the left. Because why not? No, it's right in the same place, because functions can return multiple things at the same time. So a TCP server, yay, good. Maybe we should actually give it a try, because there is something peculiar about that server that I wanted you to have a look at. So. Let's try it. Now we have an echo server running on port 4,000, so we'll just... That's what is expected. So you already see the problem. It's not really concurrent. So, well, how can we make it concurrent? Well, first, by removing the fmt usage, we will build it. Oh. And now you see something about Go and its approach to coding. You cannot have... This way. So I imported and haven't used a package called fmt. Go is extremely strict about letting you have unused variables or imports in your code. It doesn't allow them. Just that. Which means if you're debugging with print statements, because, well, that's still my most frequent way of debugging simple things, it will be a bit painful. But in the large code bases that you usually work with, how much of the source code is left unused? I don't know, but there is probably some. Go will try to eradicate and get rid of every single thing that it can safely say hasn't been used and is not accessible in any way. Which is a really, really useful feature if you think about it in the context of, well, it's done at Google, which means the amount of software written is huge, which means you want to remove all of the cognitive load that's needed to read software that nobody ever actually runs. Because why would you waste your time on that? But we talked about an echo server. Well, that's... So, does the concurrent one work? Does the normal one? Well, for people who didn't notice, I just added this go line here. What happens? What is the go line? What does the go instruction do? This way, without the go instruction, I'm just calling a function called copy, like you would; nothing magical happens. Single thread, no concurrency whatsoever. Whereas when I have go, I actually run it in a goroutine. So I make that function be called concurrently, and then this flow can continue. And where that will happen is something that Go's runtime will manage. So goroutines: lightweight threads. When you spawn them, the idea is that you can spawn as many goroutines as you want, as in, they are extremely light, their stack will be light. There is no one-to-one mapping between a goroutine and an operating system thread. So you don't have to worry that if you start anything more than, I don't know, eight on a desktop, only eight will make sense. No, this is not how you are supposed to think about it, because goroutines are how you structure your software. And that's quite orthogonal to how your underlying hardware is going to run it. And you shouldn't have to worry about the hardware for most of the cases, because in most cases you are there to solve a problem, which means express it with code and be able to run it. The Go runtime will take care of that. That's, again, a normal function call without the go instruction. And that's deferring it, that's running it in a separate goroutine.
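The echo server being demoed would look roughly like this; a hedged sketch, not the exact slide code:

```go
package main

import (
	"io"
	"log"
	"net"
)

func main() {
	ln, err := net.Listen("tcp", ":4000")
	if err != nil {
		log.Fatal(err) // no exceptions: the failure comes back as a value
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Println(err)
			continue
		}
		// Drop the "go" keyword and this serves one client at a time;
		// with it, every connection gets its own goroutine.
		go func(c net.Conn) {
			defer c.Close()
			io.Copy(c, c) // echo: copy the connection back into itself
		}(conn)
	}
}
```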
First question that comes to mind: well, how many threads will run? That's something that actually bit me, and bit us at eBay, at the very beginning of our history with Go. There is an environment variable called GOMAXPROCS, or there is a runtime parameter that you can observe and react to and change, which tells you how many goroutines can execute concurrently. It's usually useful to set it to something like your current number of hardware CPUs or cores, because anything higher will obviously have the obvious problems. But that's something worth mentioning, because it has bitten so many people and it will keep doing so, because by default this value is set to one, which means only one can run at a time. So we can spawn functions that will operate on a different thread. Awesome. How do you make them talk together? Because that's only, well, the necessary component. The answer to that is channels. Channels, as a concept, come from a very important paper called CSP, or Communicating Sequential Processes, by Mr. Hoare. I would think that some of you at least have heard of it. If you have some spare time, if you would like to talk computer science with your friends, try to read this paper, try to discuss it, go through it, because it's a really awesome paper. It's so much different from some of the concepts that we primarily use in most languages popular right now. And in the current world that's going more and more multi-core, multi-process, multi-threaded, it makes things easier. And by making things easier it reduces complexity. So we don't have to worry about how things are orchestrated. The runtime will support that. You only have to make your implementation look nice. So Go uses a concept called channels. Channels are bits of memory. They are used to communicate between goroutines. They have types. So you don't have to worry about what comes in or what comes out. Type safety is there. And with type safety, because we talked about types, we know what they look like in memory, because we know the width, we know exactly what they look like. We can also push other data into those channels from different applications, if we agree on the byte order. And channels are also thread safe. So you don't have to do manual synchronization around using channels, because channels will, under the hood, provide you with thread safety. You don't have to worry about explosions. You make a channel with, well, this make instruction; you make a channel of a type. Then we have a channel. There are two types of channels in Go. One type is called buffered, the other one is called unbuffered. Unbuffered channels, so channels that don't have a buffer, are synchronous, and they will wait when you try to read from an empty one. Basic syntax: well, you've seen how you make a channel. You push things into a channel with this funny arrow operator, or you read from a channel and you can assign it to a variable. And you can see, you can also get this extra variable telling you whether something was actually there. Because if you were reading from a channel that's already closed, you will get the zero value, just as I described with maps. But if you'd like to distinguish whether the zero that you read was because the channel was closed and empty, or because that's actually what was in the channel, you can use that extra variable there.
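The basic syntax in a runnable form, as a minimal sketch:

```go
package main

import "fmt"

func main() {
	ch := make(chan int) // unbuffered: sender and receiver synchronize

	go func() {
		ch <- 42 // push into the channel with the arrow operator
	}()

	v, ok := <-ch      // ok tells a closed channel's zero apart from a real value
	fmt.Println(v, ok) // prints: 42 true
}
```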
You, of course, have to pass the channel as an argument to a function, because, well, it has to be able to communicate. This simple piece of code, well, this piece of code will just put a random integer into the channel, and then another function will read it and then reply with, well, received something. Absolutely trivial, nothing fancy in here. Just the basics, so that you know more or less how it works. And as I said, that's not cool yet. The cool thing about channels, apart from, well, passing variables and passing data all the time, is that you can use them to coordinate. And what you can do is make a channel and then have a function that will close that channel, or write anything into it, and another function will try to read from that channel. Which means the function that tries to read from an empty channel will have to wait, and it will wait and it will wait and it will wait, and then, when you close it or write anything (it doesn't have to make any sense), that will progress. Which means coordinating two functions becomes super easy. If you share that channel across many functions, then all of the functions that are waiting on the channel, when you close it, will be able to progress. You can of course do it with mutexes, but why? Something called a latch you can also express with channels: of course, there is some work that the worker is going to work on, and so on and so on and so on. What can you compare a latch with? You know what a horse race looks like. You know how it starts: there is this very fancy gate which all of the horses have to enter, and then it's all released, and then all of the horses start at once. This is the same concept in here. So you spawn a number of workers, you spawn, well, as many of them as you want, and they will do the work after they manage to read something from the latch. So once that is closed, or they can read something from here, well, they will be able to progress. When is it important? When is that fun? When each of the workers that you prepare has to read some configuration, has to be able to, well, initialize themselves in a certain way. Maybe, I don't know, make a TCP connection, maybe make an SSH connection. It really depends on your use case, but this is a very, very frequent construct in many of the servers that we've written. It helps, and it's only, I don't know how many lines, but not too many. How many lines would that be in your language? I can hear some whispers, I don't know what they say, but, of course, it depends on the language. That's a very good answer, but if you were to try to code it in Java, that would be a lot, as in a lot. It certainly wouldn't fit on the screen in a readable way. I can promise you that.
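In Go the whole gate fits comfortably on a slide. A hedged sketch of the horse-race latch just described; the WaitGroup is only there to keep main alive for the demo:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	start := make(chan struct{}) // the starting gate
	var wg sync.WaitGroup

	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			// ...initialize here: read config, open connections...
			<-start // then every worker waits at the gate
			fmt.Println("worker", id, "released")
		}(i)
	}

	close(start) // open the gate: all waiting readers unblock at once
	wg.Wait()
}
```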
Generators are another nice thing that Go has. You have probably used an SQL database; you have probably heard of sequences, of a way to have, well, sequential or non-sequential but unique numbers in a piece of software. How can you make that in Go? It's, again, trivial. So if you need some uniqueness, if you need something to govern over that, well, you can use a goroutine. And the good thing is, you use this funny for construct: you write the current version of the counter, or your randomly generated value, into the channel, and then, well, generate another one. The great thing is, it's lazy. So this is only executed when the value is needed, which means you don't have to worry about pre-generating all of the random values in the world, or about the sizing. No, no. This goroutine will only be called upon when it's needed, which means having a sequential number generator that can be used across all of your goroutines on a single machine becomes very easy. Of course, the problem becomes very complicated if you want to have a number of machines, or a whole cluster, or a number of clusters in different data centers, but that problem spans many runtimes, which means it's beyond what a programming language should solve by default. But if you're still on a single machine, this gives you a generator. You can, of course, do other stuff in here. Also, if you want to have a single, I don't know, file interface, or a single database connection, you can use that construct as well. Multiple channels at once: because, well, it is good, we've read from one channel, but what if there is a situation in which I want to read from a channel, well, maybe there is this connection that will give me something, maybe there is this other connection that will give me something. How can I express that? Well, Go has a select statement, which will just advance and execute the code that's associated with whichever case happens first. So you don't have to worry about expressing that with ifs, or doing nested loops, or, well, looping and checking if something is available in the channel over and over. It's available as a construct, which means you don't have to worry. Providing a default: there is a construct for that. And there is a timeout built in. How many times have you written a read from a socket or from somewhere, and then forgotten about the timeout? I have done that many, many times. This still doesn't solve that problem, because if I forget to put it in, I will forget; but it's easy to specify, and it's built into the construct, so there is nothing extra that I have to use. That will be taken care of and guarded by the runtime, built in. Services. So, assuming you're doing a microservice-based application, or just a service-based application, and you want to create some services, and the service has to stay available, then comes the problem of: how do we shut down in a graceful way? Because that's a nice way to shut things down, especially if you have connections to other services, so that you don't leave them hanging, so that things don't have to time out, so that you don't waste resources. Well, you can create a service and, well, spawn it and wait for it to do all the work, and then, again, you can use a channel to finish all the work, to signal that you have, well, either completed your work or you're ready to die, which means you can use a quitting kind of notification, like in here. So: make a channel; well, do the work for three days or any amount of time that you need; you do the cleanup and then you, well, you might report (this is just a sample, so it prints to standard out that you're done with the cleanup), and then you quit. And then when you have quit and the cleanup has happened, well, your governing piece of code can finish and exit in a nice way, and you're done. Buffered channels: because I mentioned that there are unbuffered channels, we skipped the buffered ones for a reason. They are asynchronous: writes don't wait while there is room in the buffer, and they will only make the sender wait when they are full. So a buffered channel: a queue with a bounded size. Your intuition guides you well. I've been talking about channels for more or less 15 minutes.
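Pulling the select pieces together, a sketch of reading whichever channel is ready first, with the built-in timeout; the channel names are mine:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	replies := make(chan string)
	go func() { replies <- "pong" }()

	select {
	case msg := <-replies: // whichever case happens first wins
		fmt.Println("got", msg)
	case <-time.After(2 * time.Second): // the timeout is part of the construct
		fmt.Println("timed out")
	}
}
```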
Somebody would ask: are channels the ultimate golden hammer, the silver bullet, whichever mythology you want to follow? And of course, well, the fact that you can express a lot of the problems with channels doesn't mean that you should. So Go also has the usual constructs, right? Like mutexes, like locks, like atomic operations. And those are screenshots from the Go documentation that's available online, about the sync and sync/atomic packages. So those functions, those instructions, are built in, already available to you. You don't have to write them yourself, which means you don't have to create a double nested locking or some other idioms that you hope work, but really don't. They're available for you. And the last thing, if memory serves me well, are ranges. So, a way of operating over channels or slices, which is the Go way of saying arrays, from a programmatic point of view. And the thing is, you can treat both in a single way. So you can treat a slice or you can treat a channel similarly: you can just put it into the range instruction, so you can do a for loop over each. And then you don't have to learn two different syntaxes, because those things look the same and operate the same, which again makes your software nicer to read, makes everything, well, makes life better.
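As a sketch, the same range keyword over a slice and over a channel:

```go
package main

import "fmt"

func main() {
	for i, v := range []string{"a", "b"} { // range over a slice
		fmt.Println(i, v)
	}

	ch := make(chan int, 2)
	ch <- 1
	ch <- 2
	close(ch) // ranging over a channel ends when it is closed

	for v := range ch { // same keyword, same shape of loop
		fmt.Println(v)
	}
}
```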
Packages. Well, I mentioned that Go has packages when I showed you the package declaration. When I see what's available, I will see that there is a bin directory, and there is a pkg directory that will also contain things generated for specific architectures, which should lead you to a question: what about other architectures? What is the architecture support model with Go? Well, by default, if I compile a piece of software on a Mac, I can put it, well, give it to a different Mac and it will run just fine. Of course it will not run on Windows, of course it will not run on a Linux; but if I compile it on a Linux, I can take that binary, because, again, static linking, and ship it to a bunch of other Linux boxes and it will work, which is quite amazing. Can I cross compile? Of course. Can I do other funny things, like compile against sources that have some C integration? Yes, of course. And, as somebody even asked about packages and dependencies, I mentioned that Go already has a tool for that, and I can put in imports that point at packages that hover somewhere in the Internet. And that's actually a very funny thing, because it's not specified in the language how Go tools should resolve that, but it's assumed that Go tools will be able to resolve addresses like that. So if we go here and we uncomment this bit, and then we do go get, and we hope that... oops, Google Code doesn't like us anymore. Oh, well. What go get would do is get the code from Google Code, if Google Code would actually play with us today. It would download all the sources; it would download, well, the whole package. So if you're used to binary packages being shipped around with tools like Maven, with tools like Ant, with other tools of your choice, in Go that's not the case. What you will download is the source code. So if you want to examine how something has been written, you will get to see exactly that. What it also means: you will download the source code with tests, assuming the tests are there. So you can run their tests if you suspect a bug. If you want to see the documentation, documentation is part of the source code, it's embedded, which means you get to see it as well. Which is an absolutely amazing concept if you're in a company that values open source software, and it makes working with proprietary libraries very awkward. But, well, luckily, in my history, I haven't had a situation so far where I had to connect to a very proprietary library without having access to its source code in Go. Maybe because the language and the ecosystem are quite young, maybe the mentality has changed, maybe other environments are more likely to exhibit such behavior. So, go get: well, we've tried to run it, it didn't work that well, so we'll skip that part. What looks bad? Well, if you want to have JSON structures and JSON kind of binding, the way that you can express that in Go looks a bit ugly to me. I'm not a huge fan of the syntax, because it connects your type definitions with how you should parse your data. It links two things together. It also means that the structure of your JSONs starts to live in your source code, which means you can't change one without the other, which is not the luckiest of choices. On the other hand, there are quite popular libraries that allow JSON and other format bindings for many languages out there, JavaScript, most probably .NET, which means, well, that's something people look for. Before you leave, before I say thank you and ask whether you have any questions: if you wanted to learn Go, if you wanted to learn something more, of course, there is the excellent website which has a lot of tutorials. In some of the tutorials you can actually write code and run it through your browser. But if you'd like to do it the classical way, get a book. This is the book I would recommend. It's written by, well, you can see the authors here, but I would say that it's quite amazing. It's very comprehensive. It's exactly as thick as your Kindle is or, well, I haven't seen the paper version because I don't buy them anymore, but this would be my book of choice. And the last thing: what IDEs can you, should you, write Go in? Well, pick your favorite text editor. You don't really need anything fancy. If you like Visual Studio Code, it has excellent Go support, including all of the additional tools that you can use, like linting, like automatic importing, like automatic formatting on save. You can have all that in Visual Studio Code. I started writing Go with Sublime and I like it. Well, I love Sublime for that. You can, of course, do it in Vim. You can do it in Atom. You can do it in most of the other IDEs out there as well. Because the language is very simple, because the syntax doesn't have many special cases, and the type system is, again, simple, creating tools doesn't look that complex. But then, all of the complexity shouldn't be in how the software gets written, because it's not about the smartest people writing the cleverest code; it's about the problem that gets solved. So with that, I will ask you: well, try to download Go, have some fun, and Go code. And if you have any questions, just try to wave your hand. Yeah, there is a question. Yeah, so, handling errors by looking at return values: yes, the error variable that was returned from some of the instructions, if it's nil, if it's empty, that means that there is no error. If it's not nil, it should have the error information in it. It's not error codes as in integers, comparing them to things. You can use types. Well, there is a type system for your benefit here. But yeah, this is the approach here.
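A hedged sketch of that answer; the NotFoundError type and the lookup function are my illustration, not from the talk:

```go
package main

import "fmt"

// NotFoundError shows an error carrying information in its type.
type NotFoundError struct{ Key string }

func (e *NotFoundError) Error() string { return e.Key + " not found" }

func lookup(key string) (string, error) {
	if key != "known" {
		return "", &NotFoundError{Key: key}
	}
	return "value", nil
}

func main() {
	v, err := lookup("missing")
	if err != nil { // nil means nothing bad happened
		if nf, ok := err.(*NotFoundError); ok { // inspect by type, not by integer code
			fmt.Println("missing key:", nf.Key)
			return
		}
		fmt.Println("unexpected:", err)
		return
	}
	fmt.Println(v)
}
```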
How do you protect from errors that are not thought about, or a nil reference exception or the equivalent, or something like that? So, how can I protect myself from errors that somebody didn't think about? If I ignore them, well, my software will explode the same way other pieces of software explode. But because, well, the error is the second return parameter, somebody forces you to at least see it. Yeah. But I mean, let's say in Java, you would probably, around some critical stuff, have a try-something and then catch everything and at least log it and try to be ready to process the next request or whatever if something goes wrong. So how would you handle that? Well, there is a way, which I don't have an example of in the slides, but you don't have to assume that if there is a tiny error in one of the small goroutines, something really not important, it will crash your whole application. No, there is a way to protect yourself from that, of course; it's just not in the slides. IDE support, well, we've covered. Debugging is also possible. C integration, well, how many minutes do we have? Three. Is anybody interested in C integration? No. No. No. Good. So with that, I'll say thank you. Enjoy lunch, enjoy the coffees, enjoy all the great sponsors and everything else at the conference. It was my pleasure to see you, and, well, to share the experience, and I'll be available if you want to have any more conversations. Thanks. There is this voting machine at the end. If you can, please press the green button afterwards to start voting. Just do 1-0-0.
Or why you should only write an eighth of the code. You live and breathe http. Most of the things you do with a computer involve it. How many tiny little http-related utils have you already created? Is it time to stop? By no means, we all do it. I'd just like to encourage you to write the next one in Go. It's a fun little language, which feels like a cross-breed between C (pointers!) and ruby (concise, powerful syntax). But how much? Why, yet another language, but my perl/python/.. does all the things? Well, it does. But go has a few things that make it super sweet, is web-scale and real fun to use! Or at least come and listen how did we get rid of 87% of our codebase in a day :-)
10.5446/51720 (DOI)
All right. I'm on. Hey. Yay. Okay. All right. So this is where you're at. If you're in the wrong place, really, it's kind of tough at this point. If you want to sneak out, that's okay. You're not going to hurt my feelings. So we're going to talk about habits of highly effective JavaScript developers. Hashtag JS Habits. I usually just stick that up there so that if you guys want to tweet about, hey, that's a good point, or no, the speaker's stupid, whatever you want to do, it's fine. Just make sure you hashtag it so that I can kind of keep track of what's going on. So before I go anywhere farther, this has got to be my favorite thing ever, because this completely sums up how most of us feel about developing in JavaScript. So I just saw that downstairs. These guys are great. The TechJS guys are great, but that perfectly sums up what we're talking about here. So really quick, before we go anywhere, let me ask a couple of questions. First question: how many of you are pure 100% JavaScript developers? I have three people in the room. That's normal. Three might actually be on the high side, right? Okay. So how many of you would classify yourselves as .NET developers? Okay, I'm not even going to count. That's like almost everybody, right? Java developers? So I have Java developers, yay. Ruby? None. Unless you're behind the light, because then I can't see you. PHP, WordPress, where's my WordPress people at? I got a couple. Sweet. Okay. I used to make fun of WordPress until the company I worked for did a WordPress project, like the last project, and it was amazing to me how much they got done in such a very short period of time. Okay, anyway. But this almost kind of sums up what we're going to talk about in this talk, because almost none of you, except for three of you, and four, because I do, classify yourselves as a JavaScript developer. You're something else, and you do JavaScript sometimes. And we'll talk about that. That's kind of the point of this whole talk. We'll talk about what that means and what that looks like. So this is me, just real, real quick. I'm John Mills. I'm a consultant at a company called Page Technologies. I'm in Kansas City, Missouri. So I'm on like the other side of the world. But Oslo's awesome, so I don't know, if you'd like me for training or something, let me know, because I would love to come back. No pressure. Don't worry about it. So I write JavaScript. I am a MEAN stack developer, so I do Node.js on the back end. I do Angular on the front end. MongoDB, primarily. I'm also a Pluralsight author. I've got four or five courses out there. Okay. So I love this quote: we are what we repeatedly do; excellence, then, is not an act but a habit, right? So it's putting ourselves in this mindset of: I want to be really good at what I do. I am going to intentionally make decisions every day that are going to take me closer to being very good at what I do, instead of just, okay, we'll be all right. I don't want to just be all right. I want to be really good. And so a lot of times the problem is, most of you are .NET developers. This is not a problem. That's not a problem. Don't get me wrong. It's fine for you to be .NET developers. But when we start doing JavaScript, what a lot of us do, and I have been guilty of this as well, is we go to Stack Overflow, we copy and paste, it works great, we move on, and then we go back to the code that actually matters, which is our .NET code, right? Well, especially with React and Angular and all these things.
More and more of this business logic is making its way onto the JavaScript side. And as we're writing more and more of our code in JavaScript, and we're doing more and more things in JavaScript, we need to start taking our JavaScript more seriously than just, hey, it's our jQuery thing that we do, right? So let's... I didn't hit play on my timer. Okay. So that's what we're going to talk about. That's kind of where we're going. And what I'm going to do over the course of this talk is just walk through some things that we should be doing. Hopefully, a lot of you are already doing them, but this will be a good reminder. Walk through some things that, as you do them, you will become better and better and better at JavaScript, and you no longer feel like JavaScript is your doom, but something that actually kind of makes sense once you think about it. All right. So the first one is: know your code. So we talked about copying from Stack Overflow, you know, just pasting until something works, because we all do it. We do it in .NET too. And so here's what that means. So JavaScript is not your primary language, right? JavaScript is also very different. A lot of times what happens when we're doing JavaScript as a .NET developer is we try and make JavaScript .NET, right? You take the same paradigms and the same principles and the same thoughts that you have in .NET and you apply those to JavaScript. I see this a lot of times when I go to a client and they do Angular, right? Angular is an MVC platform. Well, .NET has MVC too. And so they try and take .NET MVC and roll it into Angular MVC. And that doesn't work. Like, don't do that, please. JavaScript is also helpful. And this is where the problems start, right? Because JavaScript does things for you that you don't necessarily know it's doing. And unless you start to understand what JavaScript's doing behind the scenes... So JavaScript, I oftentimes refer to JavaScript as kind of like your assistant who thinks you're busy and doesn't want to bother you, right? So JavaScript is going to kind of take care of some stuff that you probably needed to know about, but it doesn't want to bother you. It doesn't want to let you know. And we'll talk through what some of that looks like. So here's the thing. I'm going to dip your toe into the JavaScript world. And this is why we have that 'JavaScript is your doom' impression. Semi-colons. Let me ask you a question. Show of hands. Are semi-colons required in JavaScript? Yes, sir. So if you believe semi-colons are required, raise your hand. I have four, five, six people. If you believe they are not required, raise your hands. Okay. If you are morally opposed to hand raising, raise your hand. I got a couple. See, that's, all right. Okay. So let me show you something that will help you understand why JavaScript is so incredibly painful to deal with. This is from the ECMAScript standard. It says this: certain ECMAScript statements must, that is a strong word, must be terminated with semi-colons. Are semi-colons required in JavaScript? Yes. However, for convenience, such semi-colons may be omitted. All right. Ladies and gentlemen, this is the problem with JavaScript. Right here, summed up in two slides. Right? They must, but you don't have to. So, I mean, seriously, this is... so we've got to take some time. We'll talk through a little bit about what this means.
So this comes down to — these situations are described by saying semicolons are automatically inserted. Right? So JavaScript, as your assistant, looks at your code and says: well, he meant to put a semicolon there, they just forgot, so I'm going to put it in for them. Which seems very helpful. And it is. So that's cool. But here's where we go — and I'll give you my opinion at the end of this little segment, because I really don't have a strong one; I don't care. But it's important to understand.

Okay. So check out this couple of lines of code. Just shout it out: how many semicolons do you believe get inserted automatically into this code? Four. Okay, so we start with the softball, which is good. Four semicolons. Let's see — yes, four semicolons is the correct answer. Okay. And I'll tell you how this works: it starts at the v and works its way across. So var a equals 12; it hits a newline character; it doesn't put a semicolon in there. But then it sees another v, and that v doesn't make sense in that context. They meant to put a semicolon there, so I'll put a semicolon. So it's helpful, right? It's just trying to help you out.

All right, this one's a little bit more complicated. How many semicolons get inserted here? See, the last one, everybody just started shouting out "four." Four — so behind the 12, behind the 13, behind the a, and behind the console.log? Yes? No. Three. Wait a second. There is not one behind the var c equals b plus a. Because — check this out — it hits the a. So remember, I said it just kind of looks across: it starts at the v, works across to the a, goes to the next line, and sees an open bracket. Does an open bracket make sense as the next character after an a? Yes, because a could be followed by an array index, right? So: no semicolon. It's just going to keep going.

Now, this is kind of dumb. I mean, you don't write code like this. Please don't do that, right? However, what about this? How many of you have used the IIFE syntax before? Angular developers, you'd better raise your hand, because we do that. So this is the same thing: no semicolon after the a. Okay. Now, where this bites you — and why I personally choose to use semicolons — is when you're concatenating files. How many of you concatenate and minify your JavaScript? Minification actually wouldn't kill you if you did it after you concatenated. But if you just concatenate, what's going to happen is you've got one file that's not wrapped in an IIFE and one that is wrapped in an IIFE, and when you concatenate those things together, if you didn't end one with a semicolon, everything breaks. And actually, what you'll sometimes see is IIFE syntax with a semicolon in front of it — have you ever seen that? That's just to protect against potential things like that.

Okay. Last one. And .NET developers, usually when I put up this next slide, I can hear an audible sigh of relief, because the curly braces are in the "correct" spot. Right? So this is wrong, this is right — because this is how .NET developers write their curly braces. There's a reason why JavaScript developers don't. There's a couple, but we'll talk about one. How many semicolons? Three. Okay. Wait — where would you put three? After the return statement? Two. Okay. So who would have gotten that? Yeah, there's one right there. Is that where you meant?
So, let's just say yes. Okay. Awesome. Right — it'll put a semicolon in there. A return statement — and continue, and a couple of others — are what's called restricted productions, which means no newline character is allowed after them. So it'll stick a semicolon right there. So every once in a while, if you're returning object literals and you put the curly brace on the next line, it'll drop a semicolon in. It won't tell you. It's just trying to be helpful, right? You meant to put it there. So that's fine, it'll do it.

Okay. So here's what I do: I use semicolons. And I use semicolons in conjunction with a linter — we're not to linters yet; we'll talk about linters here in just a second — mostly for this kind of reason. I just want to be reminded. And for two reasons. One: does .NET require semicolons? Actually, it really does require semicolons, right? And so if you're a .NET developer and you then go and do JavaScript development, don't change your paradigm in that. See, I just spent like five minutes earlier talking about changing your paradigm, and here I'm going to say: you use semicolons over there, use them over here too. All right. But we'll talk about linting here in a second.
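For reference, the semicolon cases just walked through, collected into one runnable sketch — the variable names are illustrative, not from the slides:

```javascript
// Four semicolons inserted: each newline is followed by a token
// that can't continue the previous statement.
var a = 12
var b = 13
var c = b + a
console.log(c)

// Only three inserted here: a newline before "(" or "[" does NOT
// trigger insertion, because the next line could legally be a call
// or an index on the previous expression.
var d = b + a
(function () {        // parsed as: var d = b + a(function () { ... }())
  console.log('IIFE') // so this throws at runtime — a is not a function
}())

// "return" is a restricted production: no newline is allowed after it,
// so a semicolon goes in immediately.
function makeThing() {
  return              // becomes: return;
  {
    name: 'John'      // never reached — makeThing() returns undefined
  }
}
```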
Okay. I'm going to give you one more, and then we'll move on to other stuff: equality. JavaScript handles equality in a different way than what we're used to. Okay, so look at this real quick. var x equals 1; var y equals the string '1'. If x double-equals y, what gets printed — equal or not equal? Equal. Is that weird to anybody? Okay. Here's why. When you do double equals — obviously an int and a string are not equal, right? So JavaScript, because it's helpful, says: that doesn't even make sense, that must be a mistake, let me typecast one of these to the other one so that they can be compared. So it does an automatic type conversion for you so that it knows what's going on. And so everybody shouted out equal. Okay, what about this one? Same thing: equal. Because — is zero false? Yes. Because in JavaScript we say true and false, but really it's truthy and falsy, because that's the way JavaScript is. And falsy is anything like zero, null, undefined, false. So zero and false are equal once you do the type conversion, right?

So how do we solve that? Triple equals. Exactly. So now I've got x triple-equals y. Triple equals says: stop, right? Don't do type conversion. Just tell me if these two things are the same or if they're not the same.

So just one last thing about equality. If person.name — how many of you have written code like this? Because we don't have true and false, right? We have truthy and falsy. So person.name is totally truthy, because if something exists, it's truthy. So John exists, so this is going to write out "exists," correct? Does person.dob exist? Yes. Does person.single exist? Well, it does — but it's falsy. Right? And see, what's funny is I know this. Like, I know how this works, and I still do this, because it's convenient and it's easy. And I have been caught. In my Pluralsight course I talked about this, and then in the comments somebody came back and said: you talked about this, and then like two modules later, you did this. I was like: yes, I did. Okay. So you can't do this, right? What do you do instead? All right — you've got to do this, or something along these lines: if person.name does not triple-equal null, or typeof does not triple-equal undefined. Right? That is not cool. That is not clean. But, you know, that's among the best ways. We'll talk about underscore in a while — underscore.isNull is like the easiest. Okay.
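The equality gotchas from the last few slides, as one sketch — the person object mirrors the slide example:

```javascript
console.log(1 == '1');    // true  — == coerces the string to a number first
console.log(0 == false);  // true  — both coerce to 0
console.log(1 === '1');   // false — === compares without type conversion
console.log(0 === false); // false

// Truthiness bites on legitimate falsy values:
var person = { name: 'John', single: false };
if (person.name)   { console.log('name exists'); }   // runs — non-empty string is truthy
if (person.single) { console.log('single exists'); } // never runs, even though it exists

// The safer (if uglier) existence check:
if (person.single !== null && typeof person.single !== 'undefined') {
  console.log('single really exists:', person.single);
}
```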
But this isn't even it. We've got hoisting. We've got expressions versus statements. We've got prototype. JavaScript is different. And we're not going to talk about that anymore; we're going to move on and talk about other stuff. But as a whole, as a .NET developer moving into JavaScript, keep that in mind. And when you go to Stack Overflow because something's not working — we're talking about habits, about becoming better at what we do — don't just copy, paste something in, say "hey, it works," and move on. Like, spend five minutes. Usually it's only five minutes. And type a couple of other things around it. Plunker is awesome for this: just go out to Plunker, type a couple things around it, and say, oh, that's interesting, I didn't see how that worked, I'll play around with that a little bit. Okay.

Now, we talked about equality, we talked about semicolons, we talked about all that stuff. And some of that is hard. It's a lot to keep track of when you're trying to get something out the door, right? So we'll talk linting. How many of you use a linter? Awesome. A JavaScript linter? Okay. All right, just curious. So we'll go through this one fairly quickly then. Which one? Just shout it out; I'm curious what will be the loudest. ESLint and JSHint are the two that I heard the most — although I think only one person said ESLint; he just did this and it was really loud. Okay. Which is totally cool.

Okay. So let's talk a little bit about linting and what that looks like. So JSLint was the first one to come out, right? That's Doug Crockford's. It's his way of doing JavaScript. If you do it his way and you like it, you drop it in, it just works, it's awesome. Some people didn't like doing it his way, so then came JSHint. I heard some JSHints — a couple. Okay. JSHint was a little bit more configurable: I can turn things on and off. It was nice. It was kind of cool. Some people weren't very happy with that; they wanted it to be even more configurable, even bigger and better. So ESLint came about, right? And then for some reason people decided there's too much configuration, there's too much stuff. So then you have Standard JS. Standard JS is zero configuration: you can't configure it. And if you don't do it their way, then — and I don't do it their way, so I don't use it — but it's valid if their way works for you. So there's a whole bunch of options. I use JSHint.

Let me show you real quick what that looks like. So I've got some code right here. Slide this over. Now, you can already see there are some problems here, right? We've talked about some stuff. Give me two problems that you see right off the bat. I got a double equals. And, yeah, the return statement — I'm missing some semicolons. Okay. So what you do is you come down here to the command line, you type jshint star, you hit enter, and it's going to spit out a whole bunch of errors, right? So that's a linter. It says: hey, guess what, on line three — which is my return statement — that's an error; I line-broke when I shouldn't have line-broke. That's helpful, but I don't like command-line stuff. So those of you who lint: how do you lint? If you're a JS linter, do you do command line? Do you do a build tool? Do you do it in your editor? What do you do? Build tool — and actually, we're not going to talk about build tools, because that's like a can of worms, but you can hook them up in your build tool. I use gulp, so I hook mine up in gulp. I also just have something built into my editor. So this is Brackets right here — and I can't make that bigger, so it's actually not too bad on the big screen. But right, I can just say: hey, look at that. Return needs a semicolon, and that needs a triple equals, and I can save that and get rid of a bunch of them. I'm not going to take the time to get rid of all of them, but that's helpful.

Now, one thing about this — and this is going to be my mantra through this entire talk — is: do what makes sense to you. So, on semicolons, right? I said I don't care if you do them or not, just understand why you might do them. And same thing with linting. I like JSHint because I can configure it. And if you go to jshint.com — we'll talk about that in just a little bit — there are a ton of options that you can turn on. I mean, we're not going to talk about them, I'm just going to scroll through them, right? There's a ton of options out there that say whether or not things happen. And so I've got two set in my linting — I have this .jshintrc, and I've got two things set. One is eqeqeq, the triple-equals check, because for some reason in JSHint that's not defaulted to true. I don't know why, but that's fine, I turned that on. I also have one called evil, which sounds ominous. And it's not — well, it's okay. So that basically means — actually, I'm going to go back. I use document.write. document.write uses eval. Who uses eval in your JavaScript code? That is exactly the right number of hands, because I see none. Because eval is horrible: it takes a string and it just executes it as JavaScript. Troy Hunt would have a field day with that kind of stuff, right? There is zero security involved. And so, to turn it on, I have to set an option that says evil — because it just is evil. It's just that bad.

Okay. So: linting. Pick a linter. Use a linter. And actually, the thing I love most about linting is that it helps me understand the code, right? If I work through some of those options — which ones I want to turn on, which ones I want to turn off — and start to understand why I would turn one on and one off, you start to understand JavaScript a lot better, right? And for me, that's why I like it, because it's going to force me to really understand what I'm doing. Like, why is the eval option called evil? All right, let's talk about that for a minute, right?
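For reference, a .jshintrc along the lines of what was just described might look like this — eqeqeq and evil are real JSHint options, but treating this exact file as the speaker's is an assumption:

```javascript
// .jshintrc — plain JSON that JSHint picks up from the project root.
{
  "eqeqeq": true,  // require === / !== instead of == / !=
  "evil": true     // permit eval() (and document.write, which uses it);
                   // "evil" genuinely is the option's name
}
```

Then run `jshint *.js` from the command line, or wire it into gulp or your editor as described above.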
Okay. Know your IDE. So, this one's interesting. As a developer, I spend more time with my IDE than I do with my family. Think about that for a second, because it's true, right? I've got a wife, I've got three kids; I spend 10 to 12 hours a day in my editor and two to three hours a day with my family. It's not — I spend a lot more time. Know your IDE. How many of you — I don't even want to ask this question, it's going to make my brain hurt, but I know the answer already — how many of you use Visual Studio to do your JavaScript? Okay, that's not as bad as I thought it would be. Okay, how many of you do not use Visual Studio to do your JavaScript? Okay. No, Code does not count, because Code is not Visual Studio — well, it's called Visual Studio, anyway. But you answered my next question. Okay. So yes, there is Visual Studio Code. Visual Studio Code is just an editor; it's not Visual Studio. But there are, unfortunately, many editors to pick from.

So, as I say them, raise your hand if you use them. Code — I'll start with Code, because I know I'll get — there you go. Okay. So I've got a few Visual Studio Code. We've got Brackets — not nearly enough people use Brackets, because Brackets is my personal favorite. Atom — oh, very nice. Okay. Sublime? Did I miss any? Ah, that's what I was waiting for. Okay. Right — because if somebody uses Vim, they will make sure you know that they use Vim. And I totally set you up, I apologize, so don't feel bad. But that's the thing: I've got a friend who uses Vim, and I can't get through five minutes of a conversation without him letting me know he uses Vim. It's just kind of the way it is.

But here's the thing. One of the things I love about JavaScript is that we have a lot of options. One of the things that a lot of people hate about JavaScript: we have a lot of options, right? But what I would encourage you to do — Brackets, Atom, Sublime, Vim, Code, WebStorm. I missed WebStorm! How could I miss WebStorm? JetBrains is like a sponsor downstairs. WebStorm is awesome; I actually do like WebStorm. It's a little too IDE-ish for me — I'm not quite in the Vim camp, but I'm closer that way. Here's what you do. Let me show you this. I showed you earlier, this is my Brackets. If you go and you download Brackets from Adobe's website, it will look nothing like this. I have tabs across the top. I have icons down the side. I'm all black and weird pastel or neon colors that flash and do things. I've got JSHint installed as part of it. I've got Git integration. I can do snippets. This is my favorite, JavaScript Angular people: I open up a new file, I type i-i-f-e, boom. I have configured my environment to be very, very easy for me to use. I do c-l, boom. Oh, that doesn't even make any sense at all — but you know what I'm saying. Spend some time — unless you're on Vim; if you're doing Vim, spend hours — if you're in Brackets, spend ten minutes building out your environment. Set your environment up the way that you want it to be set up.

And the reason why I don't just say "use Brackets," right — these are free. With the exception of WebStorm, they're all free. Download Brackets, download Sublime, download Atom, spend an hour in each one of them. You will like one of them, right? You won't like all of them. Trust me. It will never be a situation like "I don't know which one to pick." But it will be different for everybody. Some of you will like one, some of you will like another one. But spend a little bit of time finding an editor that works and makes you feel more comfortable than Visual Studio. Because Visual Studio is fantastic at C#; it is not fantastic at JavaScript. Unless it's gotten very much better in the last six months, I've never really thought it was great. But not to bash Visual Studio — I love it. But that's that.

Okay. Alt-tab. Strict mode. How many of you code in strict mode? How many of you know why? Right — that's okay. So for people watching this later: I got like half the hands in the room, and then two. Right? And that's because we do strict mode, and we're always told to use strict mode.
But we're never really certain why we're supposed to use strict mode. So let me talk through a little bit about why. So here's — we'll go back to this statement: JavaScript is trying to be helpful. JavaScript is very nice. Right? It doesn't want to bother you. It knows you're busy. But don't let it help you. That's bad. Right? And here's why. We'll talk about two things. Let's talk about globals.

All right. So we've got this thing. It's basic; it's very simple. I've got a function that prints — I want to print out the word hello, and it's going to say "printing hello," right? I'm not going to take the time to show you that that's what it's going to do. Just trust me. What's this line going to do — console.log of stringToPrint, outside of that? How many of you say that's going to print out "printing hello" again? Nobody. One person. What's it going to print out? Anybody? Undefined. That is actually a very common answer. It is not correct, but that's okay. This actually will blow up. You'll get a reference error. Which does happen in JavaScript. Okay. So because we do have function scope — that's why all the "undefined" people said undefined — but this is an RHS reference, and if you try to make an RHS reference to a variable that doesn't exist, it blows up. Right? Okay.

This is great right up until you forget your var. Right? You have all done this. Everybody's done this, because it still works. Which is terrible. It's not okay. So I forgot my var, and it's going to print it out now. Right? Okay. The reason for that is that JavaScript is trying to be helpful. Right? So it gets to that stringToPrint, and it's got an LHS reference. So I've used these terms — an RHS reference and an LHS reference. Anybody heard those before? Right. So RHS reference stands for right-hand side. That's all it is. Right? And LHS reference stands for left-hand side. So stringToPrint in this case is an LHS reference; it's on the left-hand side. And so it goes to the compiler and says: hey, I have an LHS reference for stringToPrint, does that exist in function scope? But it does not. So it's going to pop it up to the global scope and say: hey, I have an LHS reference for stringToPrint in the global scope, do you have that? And the global scope will say: no, I do not. However, this is obviously an oversight on the part of the programmer. The programmer completely intended to create this variable; they just forgot. So I will do them a favor and I will create it for them. All right. So then it does. And instead of letting you know that this is a problem, it just moves on with its life. And now you're leaking stuff up to the global scope, which you may or may not care about. You should care about it. But, you know, there you go.

So what do we do? We do use strict. And what "use strict" is going to do is tell the compiler and the runtime: stop it. Do not just gloss over things. Show me my errors. So now you get a reference error — but not down here at the bottom; you get it up there at the top, where you forgot the var. So now when you run this, it blows up. So strict mode will make your code blow up. Just know this going into it. It will now start throwing errors that you've never seen before. Well, that's what's going to happen.
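The global-leak example, sketched out — stringToPrint matches the slide; the function names are mine:

```javascript
function print() {
  var stringToPrint = 'hello';
  console.log('printing ' + stringToPrint);
}
print();
// console.log(stringToPrint); // ReferenceError — var is function-scoped

function leakyPrint() {
  stringToPrint = 'hello';     // forgot the var: an LHS reference that
  console.log('printing ' + stringToPrint); // silently creates a GLOBAL
}
leakyPrint();
console.log(stringToPrint);    // 'hello' — leaked to the global scope

function strictPrint() {
  'use strict';
  anotherString = 'hello';     // ReferenceError at the assignment itself
}
```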
I'll show you one more: deletes. This is another fun little one. There is a delete keyword — how many knew there was a delete keyword? Yeah. So delete is intended to be used to remove something from an object. Right? So in this case, I've got an object, a: 100, b: 200, and I delete object.a. What's going to print out in my console.log? It's just going to print out b: 200, right? Everybody agree with that? Yes. Right. Okay.

Now I'm going to add this line, because delete's awesome. It, you know, cleans up my memory. It's very helpful. Awesome. So let's just delete the object. I'm done with the object. What happens? We're far enough into this talk that you should have some idea that nothing's going to happen, right? Because what's going to happen now: it's going to print out b: 200. Because the runtime is going to get to that line, delete object, and delete object is not allowed. But that's okay. I mean, you obviously didn't intend to delete the object. It's not allowed. So we're just going to move on. It's not going to do anything. You're not going to get an error. But it's not going to do what you're obviously asking it to do, either. Okay. And then you've got this: delete myVar. That does nothing too, right? You don't get anything out of it. So now I've got code that does nothing. And this happens far more often than you would think. I look at client code all the time and I'm like: I don't know why you've got this line of code here, it literally doesn't do anything. But Stack Overflow told them to do it, so they did it.

Okay. So with use strict in this case, where is this going to blow up? Is it going to blow up? Who thinks it's going to blow up? Yes. How many of you are getting tired? I'm almost done with the hand raising. It's fine. It's Friday, though. It's day three. We've got to get some energy. Okay, so we'll do some calisthenics. Right — it's going to blow up right there. It actually is going to tell you: hey, that's not allowed. And to me, that's why you need to be using strict mode, right? Because it's going to tell you when you're doing something you shouldn't be doing, instead of saying: well, it's okay, never mind. Does that make sense? Okay. Yeah. Strict mode makes JavaScript give you your errors. Right? JavaScript doesn't like to give you your errors. It'll still hold on to them and ignore them, sweep them under the rug, forget about it. Strict mode makes you get them. Okay.
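And the delete examples, as a sketch — same object as the slide:

```javascript
var obj = { a: 100, b: 200 };

delete obj.a;        // fine: removes a property from an object
console.log(obj);    // { b: 200 }

delete obj;          // does nothing — you can't delete a declared variable —
console.log(obj);    // and sloppy mode won't breathe a word about it

// In strict mode that same line is refused up front:
// 'use strict';
// delete obj;       // SyntaxError: deleting an unqualified identifier
```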
Ha — this. This is my favorite topic in all of JavaScript, because it's the worst topic in all of JavaScript. Right? I don't even know what this is, like, most of the time. And there's a reason we're talking about this in the context of strict mode, but let's talk about the greater this for just a minute. It's not going to be long.

Okay, so I've got an object — an object literal that says greeting: 'hello' and sayHi, a function. Okay? So I want my sayHi function to print out the greeting. But greeting isn't, I mean, it's not like a variable I have access to. It's something on the object. So the only way I have to get access to that greeting variable is by using the this keyword. So I use this to get access back up to my greeting. So I have objectA with sayHi that says this.greeting, and when I do objectA.sayHi, that's what I get. This should be obvious. Yes, this makes sense.

But JavaScript is weird. Because in JavaScript, functions are first-class objects. They're not just, like, definitions of things. They're things. So I can do this. I now have objectB. And we have another greeting: sup. Because, I don't know, in the States, if you are younger than I am, you can just "sup," right? And you've got to do it with, like, the little head-nod thing. So — but I already have the function sayHi written, and I don't want to write it again. So I can just copy it. I can take objectA's function sayHi and move it over to objectB, just because. Now, what's it going to print out? Is it going to say "hello, John"? Who says "hello, John"? Okay, who says "sup, John"? Right — it's going to be "sup, John." Here's why. Because what's on the left side of the dot is what this refers to. So if you're writing JavaScript code and it's something-dot-something, what's on the left side of that dot is what this refers to. So this, in this case — and I'm going to use this a lot — is going to be objectB. So this.greeting is going to be 'sup', right? Does that make sense? Ish.

I'm going to make it worse. I can just copy the function. Like, I'm just going to copy the function out and then execute the function. There's nothing on the left side of the dot. So in this situation, what does this refer to? The window. It refers to the window, because there's nothing on the left side of the dot, so I don't have anything there for it to refer to. I don't have a .bind or any of that stuff — and I'm not going to talk about that, because that's not the point of this talk — so it's just going to default to the window. And that "default to," by this point in this talk, should kind of make you nervous, because that's JavaScript saying: well, it looks like you want to have an object-dot-something, and I don't have an object to bind this to, so I'm just going to give you the window. So this is actually not going to blow up. This is going to work. You get "undefined, John." Because this exists, but this.greeting does not. So that's bad, right? Because now here I'm reading from the window — I could be writing to it. I could be writing stuff to the window, and I could be overwriting things on the window that I should not be overwriting, right?

So what strict mode does — we do use strict in this case too — what strict mode will do for you here is, instead of giving you the global, the window (or the global process, depending on where you're at), it gives you nothing. So now this is undefined, and now it'll blow up — you get an error. Okay. And just to kind of touch on this, I'll give you one more little thing. Now I've got an async thing. What does this refer to in my async call — I don't have use strict on? "Who knows," right? I love that. Because that's the problem. What an asynchronous call does is exactly this: you're just handing it the function, and then it executes it off there somewhere. You don't have object-dot-function call. You just have the function call. So in callbacks — if you're doing, you know, anything in modern JavaScript development where you're doing a callback — this is not what you think it is. And this is where arrow functions actually come in in ES6, and we'll talk about that here in a second.
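The whole this dance from above in one sketch — objectA and objectB are from the slides; the arrow-function bit at the end is the ES6 fix just mentioned, with names of my own:

```javascript
var objectA = {
  greeting: 'Hello',
  sayHi: function (name) { console.log(this.greeting + ', ' + name); }
};
objectA.sayHi('John');          // "Hello, John" — this is objectA

var objectB = { greeting: 'Sup' };
objectB.sayHi = objectA.sayHi;  // functions are things; just copy them
objectB.sayHi('John');          // "Sup, John" — left of the dot wins

var justSayHi = objectA.sayHi;
justSayHi('John');              // no dot: sloppy mode binds this to the
                                // window, so "undefined, John"; in strict
                                // mode this is undefined and it throws

// ES6 arrow functions don't rebind this, which is why they help in callbacks:
var objectC = {
  greeting: 'Hey',
  sayHiLater: function (name) {
    setTimeout(() => console.log(this.greeting + ', ' + name), 100);
  }                             // the arrow keeps sayHiLater's this
};
objectC.sayHiLater('John');     // "Hey, John"
```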
All right. Know who your functions are. How many of you have seen code like this in JavaScript? Right. We refer to this lovingly as Christmas-tree code, because, you know, if you get rid of the bottom part of it, that's what it looks like, right? It's like a Christmas tree. And so this is bad and terrible. And I have one line of code in each of these functions and it's already bad. If you get five or six or seven lines of code in each of these functions, it becomes so much worse, and it's almost impossible to figure out. But nobody ever said that you have to use anonymous functions in callbacks. We just do, for some reason, because it's easy and convenient. And so when you start to get code like this: stop it, and do this. Is this easier to read than that? I love that — well, right, because this only had one line of code in each function; I can't fit, like, real stuff on a slide. Okay, right. This would be much easier to read, because all you have to do is read this bottom line: asyncMethod — I'm opening a database connection — and then findAndValidateUser. I just give the function a name and I'm done. Here, you have to actually say: okay, I'm finding a user, validating the user, then I'm going to do stuff. There, I can actually name my function something meaningful, so when the next person comes along, they can just read it. And then they can pop up and find other stuff. Okay.

Promises. Who uses — I got a woohoo on promises. I like it. Okay. Promises are awesome. And actually, in ES6 they are native. You can use them natively. And you don't have to wait until ES6 — there are promise libraries out there. If you're in Angular, there's $q; there's the q library; there's promise.js. Promises are awesome because now I can take that code and wrap it in a promise. A promise, if you don't know, just kind of looks like this: I'm returning a thing called a Promise that has fulfill and reject, and then I do my stuff and I just call fulfill. Now my code looks like this: I've got an asyncMethod, then I'm going to find a user, then I'm going to validate the user, then I'm going to do some stuff. This is much more maintainable and much more readable than anonymous functions everywhere, right? We want to avoid the anonymous functions as much as we possibly can.
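A sketch of both habits together — named functions instead of the Christmas tree, chained with a promise. The step names echo the slide; the bodies are stubs of my own:

```javascript
// Wrap the async work in a promise (the slide says fulfill/reject;
// the ES6 spec calls them resolve/reject — same idea).
function asyncMethod() {
  return new Promise(function (fulfill, reject) {
    setTimeout(function () { fulfill('db-connection'); }, 100);
  });
}

// Each step is a small, named function...
function findUser(db)       { return { name: 'John', db: db }; }
function validateUser(user) { if (!user.name) { throw new Error('invalid'); } return user; }
function doStuff(user)      { console.log('done with', user.name); }

// ...so the flow reads top to bottom instead of nesting inward:
asyncMethod()
  .then(findUser)
  .then(validateUser)
  .then(doStuff)
  .catch(function (err) { console.error(err); });
```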
All right. So: ES6, ECMAScript 2015, whatever you want to call it. I call it ES6 because I don't care. ES6 is awesome. How many of you are writing ES6 code today? Okay. Fantastic. The beauty of ES6 — and here's all I'm going to talk about — is that ES6 has some things in it that you can start using almost, like, right away. You can start using them now. You don't have to change everything. You can say — hey, I mentioned arrow functions earlier for the callback stuff — I can start using arrow functions, and that's it. That's all. I'm done. I can start using promises, and that's all I have to do. I don't have to completely convert my entire mentality about JavaScript development; I can just kind of pull some stuff in now and then. (That was when I did that, like, everything shook.)

Okay. I'm going to show you this. This is like the worst website — it's the best website in the world, but like the worst website in the world. It's kangax's compatibility table, and if you Google "ES6 compatibility," this website is like your Bible, all right? Because this says what you can use and what you can't use and how everything works. How many of you have been to this site before? Not nearly enough of you. Okay. So that's good. So on this website, you get all of the features — and I'll make this bigger now. Okay, right — not that big. Right. So you can say: hey, let's just do default function parameters. Now, when I create a function, a equals 1, b equals 2 as defaults. If you hover over these little things, it's the test that runs for the first column, and that first column is your current browser. And so in my current browser — look at the support, everything's green — in the version of Chrome I'm running, ES6 is pretty much completely supported. And I can hover over each of these things. I can come down here and say: hey, the let keyword, which gives me block scope — I can hover over that and see how that works, here's what it does. This website is phenomenal. Here's IE, here's Firefox, here's Safari — notice, not as awesome. Node: Node 6, 93%. So for us Node developers, that's kind of awesome. And then you've got Babel. Not to go down the ES6 rabbit hole for a minute — I just want to show you the site. Look at this and start digging into the ES6 stuff, because you should be using it if you can. If you have to support something like IE10, which you might, start looking at things like Babel. But start using these things, because they start to clean up a lot of problems.

Okay. Don't reinvent the wheel. (I've got to remember to move my mouse up.) There's a lot of stuff out there. There are a lot of packages out there that you should be using, or could be using, to make your life a lot easier. So, now the buzz is frameworks, right? I said: who uses React, who uses Angular? Aurelia? One. Aurelia is very cool — Rob's actually here at the conference, which is very cool. So frameworks are built so that you don't have to deal with a lot of stuff, right? I can crank out something in Angular, with callbacks and all of these things, a lot faster using Angular than if I had to code all that stuff on my own. Same thing with React, right? There are a lot of other packages. So: Moment. Who uses Moment to deal with dates and times? Yes. That's exactly right. Don't, like, create your own date-time formatting stuff — use things like Moment. Bootstrap, Foundation, Material? A couple? Awesome. So, right: pick a UI framework and use it.

So the last one: underscore or lodash. Yeah. Underscore and lodash are awesome. Now here's the thing, those of you who don't know: it's basically just a collection of functions. So there's isNull — underscore-dot-isNull tells me whether or not something's null, right? Instead of that whole typeof and all that, I can just do some checks. There are a ton of functions out here. Here's the thing — and I'm going to say this on tape and I probably shouldn't: if you're only going to use one function, just kind of go look at GitHub and see what they did; don't pull in the whole package for one function. But if you're going to use several functions, this is the right way to do it, nine times out of ten. Pull in underscore. Pull in lodash. Actually, I didn't know — who uses lodash? I'm curious; this is curiosity. And who uses underscore? Less. Okay. So underscore is, I think, the more popular one at this particular moment in time. So all the hipster kids are doing lodash is what I take away from that. Okay.
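Back on the ES6 bits from the compatibility table for a second — default parameters and let in a few lines (the names are illustrative):

```javascript
// Default function parameters:
function add(a = 1, b = 2) {
  return a + b;
}
console.log(add());    // 3
console.log(add(10));  // 12

// let is block-scoped, unlike var:
for (let i = 0; i < 3; i++) {
  // i only exists inside this block
}
// console.log(i);     // ReferenceError: i is not defined
```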
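And the underscore/lodash check versus rolling your own — _.isNull and _.isUndefined exist in both libraries; the person object is the earlier example again:

```javascript
var _ = require('lodash'); // or underscore; the calls below are the same

var person = { name: 'John', single: false };

console.log(_.isNull(person.dob));         // false — it's undefined, not null
console.log(_.isUndefined(person.dob));    // true
console.log(_.isUndefined(person.single)); // false — it exists, it's just falsy
```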
However — and we're going to kind of end on this thought — don't overdo it. I just pointed out a whole bunch of packages and said you should be using packages. Packages are awesome. Pull packages in. Don't reinvent the wheel. But, oh my word, please: what's this? Anybody know? This is the ten-or-so lines of code that broke the internet. This is left-pad. Okay, so here's what this function does — get this: it takes a string and pads it to the left with spaces or zeros. Like, that's what it does. And it was an npm package. npm install left-pad. So that you can download a function that's going to pad your stuff to the left. But long story short, lots of people used this function, because apparently that's something that's very important. Lots of people like Babel — Babel used left-pad. Well, there were some issues — I'm not going to go into the drama of it — but there were some issues that resulted in left-pad being removed from npm. When left-pad was removed from npm, the internet broke, right? Because Babel quit working. There were several other large packages that quit working. So everybody's builds broke, because nobody could find left-pad anymore. npm has fixed the issue, so you can no longer unpublish npm packages like that, which is cool. But the lesson — the overarching lesson — to me is still the same: why is this an npm package that we're pulling in at all? Right? Why do I have to have 700 npm packages in my project? Or Bower packages, or pick your package manager of choice? So just kind of be aware, right? Don't reinvent the wheel. Use things like lodash. Use things like underscore. But don't, like, take it to the ridiculous extreme where you've got 7,000 npm packages because you don't want to write an isNull function on your own, right?
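For scale, the whole package amounted to roughly this — a sketch of the idea, not the exact published source:

```javascript
// left-pad, approximately: pad str on the left to length len
// with character ch (a space if none is given).
function leftPad(str, len, ch) {
  str = String(str);
  ch = ch || ' ';
  while (str.length < len) {
    str = ch + str;
  }
  return str;
}

console.log(leftPad(5, 3, '0')); // "005"
console.log(leftPad('abc', 6));  // "   abc"
```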
Okay. So that's it. I think we've got just a couple of minutes if you want to pose some questions. But there you go. Questions? Yeah — do you use, and what do you think about, TypeScript? TypeScript. I cannot get through a JavaScript talk without TypeScript coming up, which is totally cool. Okay. So here's my thought — this is my personal opinion on TypeScript. I don't use it. And I don't use it because — and talk to me next year, I'll probably be using it; I mean, I'm not, like, hardcore about it — I don't want another layer on top, right? I can get some of that using, you know, strict mode and things like that and running tests. But I actually kind of like JavaScript the way it is. I don't see as much in the hype about static typing. But I don't know; that's just me. Angular 2 is pushing it very heavily, so it's not a bad thing to know. It's not a bad thing to pick up. But I don't want another layer of abstraction on top of what I'm already doing, if that makes sense. Because TypeScript compiles down to JavaScript, so I can just write JavaScript. Now, you can say .NET compiles down to bytecode, and I'm not going to write bytecode — right, I get that. But TypeScript is very cool. I know a lot of people who do it. And especially if you're doing Angular 2 right now, you should probably be doing TypeScript.

I've been told that it kind of helps when you're coming to JavaScript from something else. It does. Right — I mean, if all the stuff we've talked about is a struggle, TypeScript prevents a lot of that stuff. But now you kind of know it, so it won't help you as much. All right. Questions? Anything else? Yep — if I don't use TypeScript... Okay, that's a fair question. So because I don't have static typing, how do I know whether my function is getting the right thing? Is that essentially what that comes down to? And, I mean, you've got to, like, check it. I mean, yes, I know TypeScript has a lot of value. Basically, if I'm expecting a number, you have to check to make sure it's a number — which I wouldn't have to do if I was doing TypeScript. All right. Fair enough. But again, that's back to: TypeScript's not bad. I know a lot of people who do it, and it's great. And if you're comfortable in TypeScript, totally do it. If you're not, don't. I mean, that's kind of the whole point of this talk: find the thing where you're comfortable. Don't fight everything. Right? Find the thing that makes you comfortable and go with that. So yes, but that is the struggle. All right. No, I'm out. So thank you guys very much. Appreciate you coming out. And have an awesome last day.
JavaScript is easy to do very badly, but also fantastic when done well. In this session, Jon will walk through some JavaScript best practices to make you a more productive developer. From linting your code to adhering to common patterns, Jon will give you practical tips to help prevent some common JavaScript troubles.
10.5446/51722 (DOI)
Good morning. How cool was that Press Play to Untape — what was it called? Press Play on Tape? — last night? Holy crap. It made me feel really old, because I knew all the games. Thanks for coming. I just want to start with a little bit of a disclaimer. Did anyone go to Renee's talk yesterday? Just a show of hands, please. A few. All right, so there was a HoloLens talk yesterday, and Renee actually has a HoloLens, and I don't. But that's good for you, because I'm guessing you don't have a HoloLens either. Does anyone have a HoloLens? Jimmy, hands down. No — and that's the whole idea, right? There are really none of them around. Well, Jimmy has access to one — I'm pointing it out — and it's only because he gave up his firstborn, right? They're really hard to get; it's almost impossible. So therefore, I'm here to tell you how you can develop for HoloLens anyway. And that's why it's cool: I don't need a HoloLens. So if you thought I should have a HoloLens, you can give me one. No? Okay.

So why am I here? My name is Lars, and I am actually Danish, but I have lived in the land of fruit and nuts — no, not California, Australia — for a very long time, which is kind of why my accent is like this. So I've drunk the Kool-Aid; I talk a lot about Microsoft. I'm a Windows Platform MVP, which is the category where HoloLens belongs for the moment. So that's kind of cool, because I fit in there. I do some Pluralsight courses, which have been a bit of a mix: kids' courses, some Windows Phone stuff. Yes, I am the guy that uses a Windows phone in Australia. And I've been Troy's test subject on a couple as well. So if you don't know what Pluralsight is, or if you would like to try it, I have some free passes — feel free to grab one after the talk. I also write a bit about different things, including HoloLens, although I haven't written much because it's so new.

And I just want to mention this: did you know that NDC is going to Sydney? Yeah? So yes, says the Australian. So if you want to have another go and a trip to Australia — because we have really, really nice beaches, we have beautiful fauna, and we welcome everybody equally — we'd really like you to come, and obviously someone else would like that too. This is generally the perception of Australia. It is not true. We are quite a civilized bunch, as long as you provide us with beer.

Now, I have this experience from last year at NDC: I had a Norwegian audience. That's a thing — did you know that? It's a thing speakers ask each other: did you get a Norwegian audience? Oh no. Please don't be like this. Please don't be these guys. The best way to learn is to do this: ask questions, be interactive. I don't mind; I thrive on questions. And throughout the talk, if you follow this hashtag, I'm going to tweet whatever is relevant in the talk, hopefully at the same time as I talk about it. So that's just a bit of context if you miss out on a thing or a link or a video, whatever it might be.

So, enough about me. Let's talk a bit about what we do every day. The current state of software is that we develop things for a screen. Is that right? How many here are software developers? Okay — everybody else? Anyone here a manager or something? No? Good. Out — no, I'm kidding. This is for everyone. But the thing is, we all live in this software world, we more so than everybody else, and we have a duty to provide good software to users. The problem is, though, that we are limited by what we can display it on. So if you're a mobile developer, you develop for a flat screen.
It's in someone's hand. Granted, you have lots of accelerometers and GPS and all sorts of sensors that you can use, but you are limited to that screen. It's the same if you're a desktop software developer: you can only develop for whatever people can show on their screen. And it goes on. If you do games — does anyone here do game development, by the way? Do we have any game developers? There's one, with a really, really long arm. He made Donkey Kong. And that's actually an advantage in holographic programming — no question, especially because of the 3D aspect. But with games, you still sit down. Unless you are using a Kinect or a Wii — which I haven't seen anyone do without injury — you sit down, right? And you're limited to the screen again. And the same with collaboration — and this is a really cool point about HoloLens, collaboration — but normally you just sit and point at a screen, which seems a bit odd. There are very, very physical limitations to what you can do.

Yeah, come on in, guys. If you sit up the front — I won't point you out. But the gist of all this is that when you've been doing it for 20-plus years — which is why I knew all the games yesterday, oh God — it all becomes a little bit the same, and you kind of become a little bit of a boring duck. You just go: another one. And: oh, this one, I can use GPS — yes! And you start focusing on the little things that make it worthwhile, right? So I'm in the same boat. I do mainly web development. I'd love to do more Windows apps, but no one gives a shit, unfortunately. So that's sort of the current state. And I guess I'm here to try and tell you that it can be really, really different, and you can start right now, even though we don't have these beautiful devices.

So let's just — just to show of hands again, because we like to double-check — if anyone was at Renee's talk yesterday, please just show it. Yeah, there's a few there. And that's okay; I'm going to try and do a different take on it. There will be some overlap, but I don't have a HoloLens, so I have to do it differently, right? Okay — just so you know what you're in for. If you don't like that, now you can say so.

The first thing is virtual reality. So we have this concept of virtual reality, which has been around for God knows how long — I think I was eight or nine when I did virtual reality for the first time. It was in, I don't know, this contraption thing, and you're holding a trigger that would make you move, and then you're supposed to shoot the other guy. And it was all eight-bit — it was kind of like the demo yesterday, all squares and spheres. And that was it, right? So it's been around for a long time. But it is a complete virtual world: everything you see is replaced; everything you see in this world is made by someone else. It has very limited physical movement, obviously, because if I had an Oculus Rift on now, I would walk off the stage, right? So you can't really move around, because you can't see where you're going. But it's great for prototyping. If you're trying to show someone what a new building is going to be like, there's nothing better than putting them in a virtual reality where they can look inside the building, right? So there are really, really good use cases for it.

The next thing is augmented reality. So you probably have a smartphone. Yes? Anyone doesn't have a smartphone?
Because I want to see it. Now, it's so prevalent, right? We all have these devices in our pockets, which are mini computers. Augmented reality is kind of like a tiny slice of what HoloLens can do, I guess, is the best way of explaining it. It's usually triggered by some sort of object: your phone will pick it up with the camera and it'll show something — it'll augment the reality. Usually what I've seen is a brochure: I want to buy a boat, I point my smartphone app at this boat, and the boat comes alive on the screen, right? It comes up in 3D. Very cool. It's really powerful.

And then, because HoloLens didn't really fit into any of these sort of digital realities, as I like to call them, they had to come up with their own. It's called mixed reality. And I'm sure this may change in the future as people find out more about what it can do, but right now the term is mixed reality. And these are all digital realities — just different ways that we are interacting with a digital reality.

So with the HoloLens, the whole idea is seamless integration of real and digital. If you went to Renee's talk, you saw this: it'll interact with whatever physical objects you have next to you. The whole idea is to have natural interactions, right? So, whatever you do normally — it's still getting there. I wouldn't say it's normal yet to do this air-tap all the time, but we're getting there. It's an interaction that you can do; it's a physical movement that you could do in real life. And then the whole idea is you can use any room, any surface, anywhere.

So normally, when you look at virtual reality — and Troy kind of touched on this in his keynote — you get this feeling, right? This is usually the reaction people get: holy shit! Because it replaces everything, and you don't necessarily have your inner-ear balance. You don't know what's coming. You can't escape it other than to just scream, right? And it does replace your entire physical reality with something that is much more "real," I guess. I love that one — so that's what Sonic looks like in real life.

But HoloLens, to me, is why we're here. And it is just absolutely pure magic. I tried it on. I have used one. And it does blow your mind. Even though you've read everything about it, and you've seen all the videos, it's just that whole change of what your world is, is quite impressive.

One important thing that I have come across, though — the more I read about this, the more I use the tooling, the more I try to come up with ideas that are really going to blow people's minds, as we always say — we need to understand: what is the difference between a HoloLens app and a virtual reality or augmented reality app? And it's an important difference. I don't want to take any of Renee's thunder away, because the HoloFlight demo yesterday was absolutely impressive, right? But to me it wasn't a HoloLens app. And I might be a bastard for saying that, but it just wasn't. Because it wasn't interacting with the real world. The planes — if they could crash maybe into the wall or whatever it might be, but that would just be morbid. It wasn't a proper HoloLens app. It was impressive, and it was really, really polished and well done. No question about it. But something like another demo app that Microsoft has is this game. And in this game — what was it called? Fragments — which is about fragments of memory for this boy, and trying to solve a mystery: what happened to the boy?
And you look around the room and things get replaced depending on what room you're in, and people will come into your reality — they'll sit down on your couch and talk to you. That, to me, is a holographic app — a HoloLens app. It changes what your physical world looks like. An important distinction, at least to me. Anyway, okay.

Let's have a look at some of the hardware specs. I love this picture, because all we've seen is the one on the right — all we've ever seen is that HoloLens, right? And it looks pretty good. It looks a little bit like you're from X-Men, but it looks pretty good. But it came from the other one. It came from that prototype. And that's the earliest prototype I could find any pictures of, and the thing he's got on his head, that black bit, is actually a Kinect. So he's got a Kinect on his face, along with all the other gadgets. So just bear in mind, we've come a really long way. I'm not sure how long it's taken — I think it's been about a five- or six-year project so far. So it's not simple.

So, the specs for this. They are important, as much as we like building software. I'm not a hardware guy, really — as long as my laptop works and it doesn't blow up, I'm pretty good. But it's important to remember what is inside this thing. So that's why I say hardware, and then I'm going to say Windows 10. It's not hardware, but it's important to know that Windows 10 runs on this device. And as Renee touched on yesterday as well, UWP is how you develop apps for this: you use the Universal Windows Platform. And you can use, if you're a masochist, C++. I'm not man enough to use C++, so I use C# — just because it works everywhere and it does a lot of the heavy lifting for me. But there are scenarios where you have to use C++, just because of the performance. So I'll just walk over here, because I like you guys too.

It's an Intel processor, so it's 32-bit. And people go: oh, it's not 64. I don't think it matters. Often 64-bit just means that things take a little bit longer to process, because the memory space is bigger and you need a more powerful processor — there are lots of drawbacks as well. So I'm not too worried about that.

This is cool, though: HPU. Anyone heard of an HPU before? No? A holographic processing unit. Yeah, now we're talking X-Men, right? How cool is that? But it is the thing that makes holograms real. It is the thing that calculates all these millions of polygons that fast. And I don't know — there haven't been any specs officially announced about it, but rumor says it's about a terabyte a second of data that it processes. So, you know, it's a pretty serious thing.

Two gigs of RAM, which doesn't seem like a lot — I thought with today's RAM we'd be at, like, 200 or something. Two gigs of RAM, and 64 gigabytes of storage, which is where you install your apps. So it still is a fully self-contained computing device, right? It has four mics — so the mics sit above your head, essentially, if you saw the device — and they're really good. They're really, really good. I'm very impressed with the speech recognition and Cortana — it is Cortana, running on the HoloLens. And then four speakers, which sit around your head, but I'll get back to that later. And then Bluetooth and Wi-Fi as well.

But the one thing that is the most important thing — take away all this spec talk — is the fact that this thing is untethered. Like, there are no wires off it. It is a completely self-contained unit. And that, to me, blows everything away. Because you can walk around — you can put a hologram here on the wall.
So say that's in your living room, and it might be, you know, Netflix streaming or something. And you say to it: Netflix, stop — and it'll stop. And then you walk into your kitchen, you make a cup of coffee, you come back, and it's exactly where you left it. Like, you don't have to remove the hologram — and you can take the HoloLens off and on again if you want. But things are exactly where you left them — if you're a good developer. Because it's not that easy. There are lots and lots of abstractions and tools to let you do it, but it can be a little bit, let's say, daunting to read all the documentation on all the various things that make a hologram stick. But I won't go into that, because it's boring.

This is normally what you see. Has anyone here followed the HoloLens marketing blurb — the tweets and Facebook stuff — and seen the videos and all that sort of stuff? Yeah, a few nods? Please don't be Norwegian. Come on. So you've seen this, and this is normally what you get. You say: oh, this guy's standing there, and look, he's got the ocean — and it must be Australia, because the shark's about to eat someone. And this is actually not how it's going to look. Because the field of view between you and this thing is very, very narrow. It's more like this: it is like a box in front of you, right? And that's why we're talking about sound later, because sound becomes really important. You can't see much. And that's why in every single demo you see, they kind of look around like this — because they can't see anything, right? Even the whole hologram can't fit inside that field of view sometimes. And it takes a little while to get used to, maybe an hour or two. But it is a limitation for sure. It is something that you need to be aware of, because people can't necessarily see everything they do. And the other thing is: you put a hologram over there, and — you saw Renee yesterday, he just fired off a hundred of these meatballs — if you had put a hundred things in different places, how are you going to remember where they are? Like, you have no idea, right? So that's a thing as well: you forget where you put things.

All right. Just to blow your mind a little bit, let's have a look at this video. It's from E3, if I can get over here — which is a gaming conference. Oops, I'm not putting sound on. Sound, please. Hello — is there sound on? I'm not sending any sound. Oh no. All right, I'm going to do that again. Still nothing. Oh, that's really disappointing; that worked about five minutes ago. Hang on. No. It says it's plugged in — I got a notification. Nothing. All right, bear with me, guys, because it's important — I do need sound later. So let me just get this sorted. No. Yeah, that's what I did. That one. Thank you — see, I'm not a hardware guy. Thank you. Okay. Right, Clay. I really don't know what I'm doing here. Yay. Uh oh. Problem. All right, let's try again. I didn't enable the other one. I'm not going to do it again. Really? All right, that's disabled. Okay. Welcome to my talk on hardware configuration 101. All right, let's keep going.

So this is from E3, and this was one of the earlier videos, one of the demos that Microsoft did on stage, on a Minecraft world. "It's awesome to play with the controller, but could we show them something new, Sax? Sure. Let's take our experience off the wall and then put it on the table over here. Create world. I see Lydia way up there."
As I run around and play, Sax can easily navigate and manipulate the world using his voice and his hands. He can walk around the hologram, pan around for different viewpoints, and even look inside. How cool is that? Holy crap. And that's why they use Minecraft, right? Because we all know it, and we all know it as a thing on a screen. And then suddenly they make it 3D in real life. So this was an early example. Things have moved on since then. But I still like this because it really showcases what is possible in a way that we can relate to. So let's have a look now. Are there any questions, by the way? I'm going to keep asking. OK, the building blocks of how you build a HoloLens app. Because there are elements that are really important. There are four in particular that you need to pay attention to. So we've got Gaze. Gaze is what you're looking at. Gaze is essentially how you select things, as with a mouse. So it's your mouse cursor. And it's not your eyes. So you don't have to program for people looking. It's the center of the HoloLens as a ray. So it's usually called a raycast. And that's because the object that you use in programming, as we'll see in a second, is called Raycast. And the Gaze is your mouse pointer for whatever object you want to interact with. So it's important. It's important that you know what people are wanting to interact with. So let's have a look. Just quickly. So I've got, here's something I prepared earlier. Can you all see that? All right? Yeah? I know it's a little bit tricky in this room with the middle. I mean, I feel like I'm in a tennis match, right? I'm doing this all the time. But you guys just try and see if you can see the screen, OK? So all HoloLens objects are built in Unity. Actually, we can start in Unity just for good measure. So you set up a scene, which is essentially your app. So an app has at least one scene. And I have some 3D objects here. So this is my cursor here. So if I double click on that, you get lots of information about it. And if you don't know Unity, don't be scared. Like, six months ago, I had no idea what any of it meant. No clue. It is not that scary. Most of these things, you never have to touch, or only have to touch once. But you set everything up in Unity. In this case, we have this cursor, which is just a 3D object of a ring. And that ring, we want that to be where people are looking, right? We want that to be the indication that you're looking at something and where it's placed. So the way you do this is you create scripts. So down here, I've got folders. And I have some scripts in these folders. And they are just C# scripts. If you notice, C#, it really is as simple as that. We create a script by going File, New Script, blah, blah, blah. And then we drag and drop that script onto the cursor. In this case, it's the world cursor, right? I just want to show you the code, because otherwise we can spend a really long time just going through all this. Every single script that you attach to objects has a Start and an Update. Start is basically the way you set things up; it runs once on initialization of the app. And Update runs every frame. So there's 60 frames a second on HoloLens, without exception. Everything has 60 frames a second, because that's the way you make people not throw up. So it's really important that you have a really, really good frame rate. And there's lots of tooling to help you do that. But all this does is that we have a mesh renderer.
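For reference, here is roughly what that world cursor script looks like: a minimal sketch following the shape of Microsoft's Holograms 101 tutorial code, which is what these demos are built on. The class name WorldCursor matches the demo; treat the exact details as assumptions about that era of the SDK.

using UnityEngine;

public class WorldCursor : MonoBehaviour
{
    private MeshRenderer meshRenderer;

    void Start()
    {
        // Grab the mesh renderer on this object (the ring) so we can show and hide it.
        meshRenderer = this.gameObject.GetComponentInChildren<MeshRenderer>();
    }

    void Update()
    {
        // Raycast into the world from the user's head position, along the gaze direction.
        var headPosition = Camera.main.transform.position;
        var gazeDirection = Camera.main.transform.forward;

        RaycastHit hitInfo;
        if (Physics.Raycast(headPosition, gazeDirection, out hitInfo))
        {
            // We hit a hologram: show the cursor, move it to the hit point,
            // and rotate it so it hugs the surface we're looking at.
            meshRenderer.enabled = true;
            this.transform.position = hitInfo.point;
            this.transform.rotation = Quaternion.FromToRotation(Vector3.up, hitInfo.normal);
        }
        else
        {
            // Nothing under the gaze: hide the cursor.
            meshRenderer.enabled = false;
        }
    }
}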
Actually, that's the mesh renderer, yep. And in here, we have the transform. So that's where the camera, or HoloLens, is facing, the position. So that's your head position, when you do this. And then we have forward. So that's where you're looking. That's the raycast, right? And then you use this Physics.Raycast, which is a boolean. Give it the position of where you're looking. And if that's true, then you can use this hit info, which is a RaycastHit. So that's what am I looking at. This turns false if you're not looking at anything, or if the little ring doesn't hit anything, right? Which is why we have enabled false. Pretty simple. If we do hit something, we enable the cursor, the mesh renderer. And then we set the hit info. So the hit info of the raycast, and the point of that is what we set the transform position to be. So that's where we're looking. And then we can rotate it. So the position of the mesh, so this is complicated. The position of the mesh compared to where we're looking, right? So if we're looking, as we'll see in a minute, at a ball, we'll put that on the ball. And then we have the rotation here, which we're saying, well, let's make it hug the surface. So it looks a bit more real. So you don't just have this flat ring that sort of sits not in real life. So the way that looks is, if I just go in here, we'll just start it without debugging. So this is the emulator, right? And that's a really, really cool tool. It's actually incredibly well done for a first-gen device. The emulator does everything the HoloLens does, more or less, obviously without the physical experience of it. But it gives you 3D space. It gives you all of the, I'll just do this. Here we go. Reactivate it. Yeah. It has all the tooling that goes with it so we can see where everything is. Let's move it a little bit more. So we have all the coordinates of everything that you would be looking at. So these are your head coordinates. Why is this not starting? Come on. So that's the start menu. I'll just press Start. And I'm just going to go over here. And I'm going to select my Origami. So the tap is what I was doing. That's just the Enter key. So it's as simple as that. You just have Enter, and you have Start. And that's pretty much it. So here's our little app. This is our, and I'm using the arrow keys and the gamer keys, WASD. And I'm going to move around this. And as you can see, when I go over the surface here, hey, let's just zoom in a bit. I'll move in. That's our ring, right? So we're hugging the surface as we go around. And that's it. That's all you do. Well, obviously, you can have this cursor be anything. It can be your company logo. It can be your finger. It could be whatever you want it to be, right? So that's gaze. It is really as simple as that. OK. The next thing, which is probably a little bit more interesting, is, what's that? Here we go. Gestures, right? So we want to be able to interact with these elements. I love that one. We want to be able to interact with the objects that we're looking at, and that's where we use gestures. The most common one of the, hello? Oh, you're still here. Now, the most common one of these is the tap. And there's what's called a gesture field of view, or gesture frame. And that's essentially where the HoloLens is looking for your finger. So you can have actions based on whether it can see your finger or not. But essentially, it's all this, this. You just do this all the time. There's a whole lot of this. And that's mouse clicks. And it is meant for you to do this.
So if you move your head, it can't see your finger, right? So you start becoming a robot because you've got to sort of move your hand with everything. It's a bit mechanical, but it works really well. It does pick it up really, really well. OK. So that's the tap gesture. It also comes as tap and hold, or tap and hold and move in 3D space so you can move things around and manipulate them. And then you have the bloom, which is just that. That's just the start menu. That will always work. You can always get the start menu up. And again, see, there are again more marketing shots. No field of view displayed at all. So let's just try and make a few gestures, shall we? OK. So the way that you do gestures is you have a gaze gesture manager. It's built on the SDK. And all of this tooling, by the way, is free. And if you follow the Twitter handle, I will eventually tweet a link to where you can get all the tooling and how to set it up. And I wrote an article yesterday on the whole introduction to building HoloLens apps. So we have, in the gaze gesture manager, a gesture recognizer, which actually recognizes the gestures. And then, on Start, which is our initialization, we attach these events, event handlers. The tapped event comes for free. We're going to do that a lot. And all that does is that if it detects a tapped event, then we say, well, if we're actually looking at something, then we send it an OnSelect action. And then we just start capturing gestures. And that's on Start. And then every frame, we do a raycast again. And we set the focused object. So every frame, we get the raycast. And we look at, OK, are you looking at something? No. Yes. No. Bool, bool, bool, bool, bool, bool. Hold on. 60 times a second. Which means that when we actually look at something and we tap, it knows whether we focused on something or not. And that is really as simple as that. And then we cancel gestures and start capturing them. Bit of optimization there. So we get our right frame rate. And all that does, so again, up here, I'll just click on my emulator so I can move around here. So when I, oh sorry, I forgot to show you what actually happens. So here, OnSelect, that's that method here. So that's attached to the object itself. The thing that I can select, the thing that I tap on, will get this. So if you tap on something that doesn't have the OnSelect, well, nothing happens. It's just an event. Does this look familiar, C# developers? This is something you could do, right? Well, you can read this. It's not rocket science. It really is quite straightforward. Because initially, when I thought about a HoloLens app, oh, God, it's going to be tricky because there's maths and there's 3D space in there. But no, it is initially very simple like this. As Rene said, you do get into situations sometimes that are not. All this does is it says, if you don't have a rigid body, that essentially just means, do I have physics? So does the 3D object have physics? If not, you give it physics. So what happens is that when you tap on it, so I'm just going to hit the Enter key here, it falls down. Because I'm in the room. Actually, I'll show the room. And they're suspended in mid-air because they don't have physics. They're like magic, right? One thing I just want to show you now, we're looking at the emulator. So there are all these things here, all the body movements. If I tilt the head, so if I move here, so Q and E will tilt the head. So you see my head can roll.
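A minimal sketch of that gesture wiring, again following the shape of the Holograms 101 tutorial code the talk borrows from. The UnityEngine.VR.WSA.Input namespace is the 2016-era one; treat names and signatures as assumptions about that SDK version.

using UnityEngine;
using UnityEngine.VR.WSA.Input;

public class GazeGestureManager : MonoBehaviour
{
    public static GazeGestureManager Instance { get; private set; }

    // The hologram currently being gazed at, updated every frame.
    public GameObject FocusedObject { get; private set; }

    GestureRecognizer recognizer;

    void Start()
    {
        Instance = this;

        // Set up a GestureRecognizer to detect the tap gesture.
        recognizer = new GestureRecognizer();
        recognizer.TappedEvent += (source, tapCount, ray) =>
        {
            // On tap, send OnSelect to whatever we're looking at.
            if (FocusedObject != null)
            {
                FocusedObject.SendMessageUpwards("OnSelect");
            }
        };
        recognizer.StartCapturingGestures();
    }

    void Update()
    {
        GameObject oldFocusedObject = FocusedObject;

        // Raycast from the head along the gaze, 60 times a second.
        RaycastHit hitInfo;
        if (Physics.Raycast(Camera.main.transform.position,
                            Camera.main.transform.forward,
                            out hitInfo))
        {
            FocusedObject = hitInfo.collider.gameObject;
        }
        else
        {
            FocusedObject = null;
        }

        // If focus changed, restart gesture capture: the small optimization he mentions.
        if (FocusedObject != oldFocusedObject)
        {
            recognizer.CancelGestures();
            recognizer.StartCapturingGestures();
        }
    }
}

// And the OnSelect handler on the sphere itself: give it physics so it falls.
public class SphereCommands : MonoBehaviour
{
    void OnSelect()
    {
        if (!this.GetComponent<Rigidbody>())
        {
            var rigidbody = this.gameObject.AddComponent<Rigidbody>();
            rigidbody.collisionDetectionMode = CollisionDetectionMode.Continuous;
        }
    }
}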
Because you could do this when you have HoloLens on. So you get that information as well. You can select what room you're in. So their room's preloaded. If you have a HoloLens, you can record your own room and put it into the emulator. Makes it even more useful. But all this is that has this device portal, which is a website. So this is now connected to the emulator. And if you had a physical device, you would get the same information just for the physical device. And the really cool part of this is this 3D view. So this is my plane. So I just used the mouse key here to go in and look. I mean, obviously, this doesn't look that impressive, because it's just a bunch of squares. If I update my surface here, it builds the room for me that I'm in. So I can fully completely see where this room is. Just a little bit like that. Move it over here. And then if I open the emulator, as I move around, see it moves over there as well. So obviously, I would have this on a different screen. I normally work with three screens, just because there are so many tools that you want to see at the same time. So you get a fully, I don't want to say immersive, but a full experience of what you would experience in a HoloLens. And there's many more tools in this, especially around performance, which is a complete talk on its own, because it's a reasonably complex topic if you want to have lots of holograms, especially at the same time, even if they need to interact as well. So still no questions. Is this? Yes? Yes? I have a question concerning, I think this is the coolest thing that's happened in the last couple of years. So do you know anything about the plans of rolling out consumer versions? Do I know anything about rolling out consumer versions? Is anyone here from Microsoft? No, I don't know a thing. I have no idea. No, I actually don't know. There's been nothing I've heard at all. Right now it's dev kits. And the one I tried is very much a dev kit. It's like a prototype, essentially. They're quite fragile. They do work fully as you see in the videos. There is no magic there in terms of Photoshopping. But it is, the field of view and the way that it feels is still very much a prototype. But consumer versions, I don't know. I can guess. Two years, three years? I don't know. I think it's probably about that far. But I hope it won't be $3,000 for one. But no, I don't. I don't know when the consumer version would be released. Yeah? You said field of view. Do you think there is any plan of expanding the field of view? So is there any plans of expanding the field of view? I really hope so. That's the only I can say. There was a job at someone posted that was announced from Microsoft of an Optics specialist or something for the something devices unit or something. I mean, hopefully that's a good thing. But I would have thought so. I think it has to do with processing power. So once they get their hardware better, they might be able to do more. But I actually don't know. Again, I really hope so. Because it would make it a lot more immersive, for sure. OK. Let's go back and look at the thing that I really hope is going to work. Bit temperamental sometimes. So voice is quite important because you can't keep clicking on things, right? You can't keep doing this because it gets boring and you get tired. So you can talk to the device and it'll understand everything you say. I don't know what I'm saying. I'm just saying it's not a big deal. I mean, but in English, it almost doesn't matter what accent you have. 
I think if you have a really strong Scottish accent or something, it might be a problem. But it actually does work really well. OK. I'll get rid of that guy for you. Sorry. There's a command which is select. So you can always look at something and say select. If it is selectable, something will happen. And you can't override that. It's a reserved word. Select will always work. And of course, Cortana works, right? Yeah, Cortana. Because it's going to bleep everything over here. But you can say, you know, Cortana. And she'll come up and she'll start working on your device as well. So yeah, it's internet connected. And then there are custom commands, which are by far the most useful thing. And there's lots and lots of theory around this that I won't go into as such. But just bear in mind that speech design is by far the hardest part of this. Getting someone to do something when you say something is not hard. But having a whole catalog of commands that you want your users to use is by far the hardest. Because you can't have words that are too similar. You can't have sentences that are too long, because people won't remember. You can't have... what is it? Yeah, they have to be non-destructive, right? So you can't have things that override other things. If you have words that are similar, they shouldn't override each other. And you shouldn't have more than two syllables in words, because then it starts getting hard to recognize. So there are all these guidelines around how you design your speech or your voice commands. And that's the hard part. So I want to show you the easy part, which is really, really quite simple. And I was surprised, because this is... I've done a fair bit of Cortana integration. Has anyone here done Windows apps? Are there any Windows developers? There's a few. Hello! Have you used Cortana? Has anyone here used Cortana on an Android, on an iOS, on a Windows phone, whatever it might be? Oh, come on. It's like the good version of Siri. Come on. No, I know what you're saying, because it's a thing we're not used to, it's an unnatural thing to go around and go, hey, Cortana, tell me a joke. It gets boring quick. By the way, Cortana's told over half a billion jokes now. Someone does it. Oh, no, no. Stop, stop, sorry. Told you she was going to wake up. All right, I can show you, but first I need to quit the current app. Is that okay? No! All right. I'm not even sure what she was going to quit. Wow. Calm down. So let's have a look at speech. And it is really simple. You have a keyword recognizer. Actually, I'll go down the bottom here first. Oh, that's an error. So you have a keyword recognizer, and then you add keywords to essentially an array. And the keywords are text strings. So we're back to magic strings again. But they are just text strings, right? And that means you can do whatever you want. So we have two here. We have reset world and we have drop sphere. So they're different enough, right? They're short, they're different. All right, fair enough. And then with these keywords, we attach them in here to our keyword recognizer. And then whenever it recognizes a keyword, it fires this phrase recognized. And we get all this for free. It's all built for us. It's all part of the SDK. And then we have our event that says on phrase recognized. And we invoke whatever that lambda function that we gave it up here is.
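In code, that looks roughly like this: a sketch following the Microsoft tutorial the talk says this is borrowed from. UnityEngine.Windows.Speech is the Unity speech API; the GazeGestureManager reference assumes the gaze manager sketched earlier.

using System.Collections.Generic;
using System.Linq;
using UnityEngine;
using UnityEngine.Windows.Speech;

public class SpeechManager : MonoBehaviour
{
    KeywordRecognizer keywordRecognizer = null;

    // Map each keyword (a magic string) to the action it should trigger.
    Dictionary<string, System.Action> keywords = new Dictionary<string, System.Action>();

    void Start()
    {
        keywords.Add("Reset world", () =>
        {
            // Broadcast: every object underneath gets OnReset, not just the focused one.
            this.BroadcastMessage("OnReset");
        });

        keywords.Add("Drop sphere", () =>
        {
            // Only the object currently being gazed at gets OnDrop.
            var focusedObject = GazeGestureManager.Instance.FocusedObject;
            if (focusedObject != null)
            {
                focusedObject.SendMessage("OnDrop");
            }
        });

        // Hand the keywords to the recognizer and start listening.
        keywordRecognizer = new KeywordRecognizer(keywords.Keys.ToArray());
        keywordRecognizer.OnPhraseRecognized += KeywordRecognizer_OnPhraseRecognized;
        keywordRecognizer.Start();
    }

    private void KeywordRecognizer_OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        // Look up the phrase that was heard and invoke its lambda.
        System.Action keywordAction;
        if (keywords.TryGetValue(args.text, out keywordAction))
        {
            keywordAction.Invoke();
        }
    }
}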
So there are other ways you can do it as well, obviously. But that's the gist of it. And by the way, all this code here, I've blatantly stolen from the Microsoft tutorial, because I want you to do it. I want you to go out and do this. If you have any interest in holographic development, it is that simple. And again, follow the Twitter handle and you'll get where all the links are. And so what this does is that reset world sends an OnReset event and drop sphere sends an OnDrop event. This is a broadcast message. So that's actually for every single thing, not just what you're looking at, whereas the drop sphere is only for what you're focused on, right? So that's a slight difference here. I'll go back to our app here. Let's see if I can find. That's the problem when you're, oh, there we go. So there's my other ball here. I'll need to get my, there we go. Do you remember that I dropped it to the floor, that right one? Now it's back again. It's because I said the keyword and it was still listening. So if I drop this, whee, and then I go and I do the same with the other one. Actually, no, I'm not sure if I can, right? We just made a voice command. So I can say drop sphere, whee, and it falls down, right? That's how well the emulator works, because my pronunciation is okay, but I'm certainly not British, right? It's not super clear. It picked it up anyway, the Australian part of things, right? And then the other thing we could do obviously is that we could say reset world, whee, and they come back. And you can have so much fun with this. Like you can keep making up, you know, keywords and do things and make them dance or whatever you want to do, right? Actually implementing the keyword part of it is not hard, which is my point. Everyone here can do it. You can. It's really not hard. So let's have a look then at the last of the four pillars or building blocks that you need to know. So we've gone through gaze, gestures, and now speech. So the last one, bleh, is sound, right? Sound is very important because, as I said, you might have people put holograms here and everywhere. And you need to know, or you need to let your users know, where these are if they can't find them. So having sound, especially spatial sound like this, which is what HoloLens does, is crucial. And to be honest, I haven't mastered this. It's really hard to get sounds that are appropriate. You don't have to worry about the spatial sound bit of it, because the SDK does all that for you. But this thing, the SDK, does what's called the head-related transfer function, which is very clever, I'm sure. But all that means is that it mimics what your ears would normally hear. So the speakers above your ears on the HoloLens will put out sound that is slightly adjusted to each ear based on the direction of the sound. So if I heard something over there, it would reach this ear later than that ear. And that's the simulation of, I guess, what HoloLens does to make you turn your head and say, oh, it's over there. Very simple concept, but really, really hard to implement. But we don't have to, which is good. So I'll just see if I can get this demo to work as well. So does that make sense? Those four things are what you need to know, right? Those four things are everything you need to know to get started. I'm not saying it's everything you need to know ever. But to build relatively simple apps for HoloLens and get you started, that's all you need to know. So, sphere sounds. So these are assets in Unity as well.
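For what it's worth, getting that spatial sound is mostly configuration on a normal Unity AudioSource. A minimal sketch, assuming the MS HRTF spatializer plugin is selected in the project's audio settings; the class name SphereSounds matches the demo, and the clip wiring is walked through next.

using UnityEngine;

public class SphereSounds : MonoBehaviour
{
    AudioSource audioSource;

    void Start()
    {
        // Add an AudioSource and route it through the spatializer,
        // which applies the head-related transfer function for us.
        audioSource = gameObject.AddComponent<AudioSource>();
        audioSource.playOnAwake = false;
        audioSource.spatialize = true;     // hand the sound to the HRTF spatializer
        audioSource.spatialBlend = 1.0f;   // fully 3D positioned, not flat stereo
        audioSource.dopplerLevel = 0.0f;
        audioSource.rolloffMode = AudioRolloffMode.Custom;
    }
}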
So, it's a bit small anyway. Down here, oops, sorry. Here we have an ambient sound, so that's a soundbite. So that's part of our project, right? It's an asset we have in this project. I don't want to go through how you build and all that, because we already did that yesterday. Building and deploying your HoloLens app is a little bit tedious, because you've got to go into Unity, export it to Visual Studio, open it in Visual Studio, start it up, run it on the emulator, wait for it, and so on. Rene was a bit clever about it. He had some scripts that would alleviate some of that pain and make it a bit easier, but it still is a bit of a long, long haul. So, let's go back to Visual Studio. So we have, again, on Start, we set up our audio source, and down here we have two clips, impact and rolling. So we have when something hits, and then we have obviously when the balls are rolling. So, naming. Remember Jess's talk last night? The hardest thing is naming in computer science, right? Cache invalidation and naming things, and it's so true. So try to name things in a way that makes sense. So OnCollisionEnter, that's when we hit something with the object. We then go and we calculate if we should play the audio clip. So that's the relative velocity. Does it roll fast enough? And then we have OnCollisionStay. So that's while it's still touching, and then we still try to figure out, well, has it really stopped, or is it still going, type of thing. And then based on that, we play the audio clip, or not. And on OnCollisionExit, we just stop the audio source. So again, relatively simple code. The main thing again is to try and find audio clips that make sense. Now, this. Okay, now these didn't actually play audio, which I thought they would have. Let's try here again. And now it's frozen on me. All right. I'm pressing buttons. Yay. All right. Trust me, there is audio on it. I had to have one demo fail, of course. But I'll just keep going, because the audio is, you click on something, it falls down, it makes a noise. These noises are sort of a bit crude. But what I was going to show you was that when it rolls down and I turn the emulator, the sound changes as well. And I haven't done anything to do that. I don't have to go in and say, when you turn your head left, make sure the ball rolling on the right sounds different, blah, blah, blah. There's none of that. It just comes out of the box, right? Okay. So please, any questions on any of this? Because that's kind of like, that's your super fast introduction to HoloLens coding. If, yeah. How is it to do things like WebViews or something like that? So how is it to build things like WebViews? Can I reword that and say, how is it to build 2D apps? Is that essentially what you're asking? Yeah. So 2D apps work really well, because you can essentially pin them on the wall, right? And, you know, anywhere can become your screen. You can have them suspended in mid-air as well. And as you walk around them and see the backside, it's mirrored. Right? It's kind of cool. But anyway, it's not very cool when you need to work on it. So you can absolutely make 2D apps. And Outlook, so Mail, the Windows Mail app, just called Mail, now works on HoloLens. There's a HoloLens version of it. And there was one more thing I can't remember. I think it was Outlook or something. You guys might know. But Microsoft is building their 2D apps to work on HoloLens. Absolutely. And it's easy, because they're all UWP. They're all Universal Windows Platform apps.
So you can take almost all your code and just deploy it for HoloLens, and have it in the HoloLens store as well. So yeah, you can absolutely do 2D apps. And I think there'll be lots and lots and lots of those, because they're much easier to build. And why wouldn't you? If you have a 3D modeling tool, you might need your Outlook there. It makes sense, right? So what's next? This is very early days. None of us are going to have a HoloLens, yet we're all here. We're all trying to find out what is involved in building one. I hope it's not going to be like this, because that would just be really awkward. But yeah, there are some social challenges, because the truth is when you put one of these on, you can't help but look a little bit like a dick. I don't think it's a good look. And because you see everything else that no one else sees, you have this big grin on your face and you're starting to go, yeah, it's not a good look. So there are some social challenges for sure, but it's very early days, right? No one has one. If we see one, it's because, well, we're geeks, so we go, ooh, it's cool. We don't kind of think through the things that, you know, a mom will go, really? Actually, you go, really? So the challenge we have is to build software that's going to make sense and not be completely unachievable like this. The Mars rover HoloLens experience. I couldn't build that. I don't have the Mars data. Like, that just seems really far-fetched. But we need that to be inspired to build things that are maybe not as complex. Right? I think it's going to be more like some of these sort of toolbench type apps. So something where you go and you say, I can put my toolbox here and I can build things in real space. Actually, I'll move it over on this table because it's in the way. So you still have the HoloLens experience that I talked about earlier. Make sure that what you're building actually makes sense as a mixed reality app and not just a virtual reality app. So that's, I think, what's going to happen a lot. As developers get their hands on this, I don't think it's going to be as professional as what Rene was building, the HoloFlight type app, because that's hard. That is a really impressive piece of software. So that's my theory, I guess. But the really cool thing is that now it's your turn, right? Because all of these tools are free. Everything I used here was free. Visual Studio Community Edition, Unity for HoloLens, there's a dedicated HoloLens SDK for Unity. The emulator, all of these tools are free. And I would recommend, though, to find someone that knows 3D modeling, because I have no clue. I am not a 3D designer. I can't draw anything. So I'm relying on the Unity shop. So lots of 3D models you can actually buy, or free ones from the shop. And there are many other ways that you can acquire 3D models as well. And basically, let your imagination go nuts. There are almost no limits, right? Because it's all virtual. It's a 3D world that you can build, but you can make it interact with your real world. And then, we were speaking about devices before, and they are speeding up. So now Wave 3 has arrived. Waves are how they're inviting you as a developer to spend $3,000 on buying a HoloLens. And it's tricky, because I was speaking to various people about, can I just have one here and someone can hold it and just show it. And the kind of sentiment is, well, we'd love to, but if it breaks, I can't get another one, for one. There's no warranty on it, and I can't insure it.
So it's very sort of, if you have one and you get access to one, you're still a bit privileged, right? And that's unfortunate, but that's why we have all the tooling. So, but they are shipping. Wave 3, I'm on Wave 4, as I said before, so hopefully I'll get an invitation, get another mortgage. It's a bit like that. But I want to leave you with something that hopefully will inspire you a bit, because we can't all be super fantastic 3D designers and modelers. I mean, most of us have a job as well, right? We need to actually make a living, and this would be something extra. So one of the guys that did acquire a HoloLens dev kit thought, okay, what can I do with you? I'm sorry, I didn't catch that. She's very temperamental today. So a developer that did manage to acquire a developer kit built this. I just want to show you this. I think this is very cool. What I think I'm going to do, I think I'm going to just simply put a sphere in the middle of the bush, and then just go around it and see what areas are outside the sphere, and just trim them. And I don't know, let's see what happens. I'm going to try it out and show you guys whenever I get it done. So this is what it looks like after trimming. I think I did a pretty decent job. Let me show you without the sphere, which by the way is low polygon, so it's not quite spherical. So it's not that great, and I could keep going if I wanted to, but that's good enough. So this is what it looks like. I think it's pretty good. It just needed to grow a little bit more so I can trim it a little better, but I'm pretty happy with it. And that's how HoloLens can help you do gardening, right? How awesome is that? And it's simple. The implementation is really, really easy: create sphere, that's it, and you move around a bit maybe. But this guy had an idea, right? I don't know if I can get my mom to wear one to do that, that's probably the challenge. But it doesn't have to be a Mars rover landing experience that you're building. It doesn't have to be a HoloFlight type experience. Like, you can do stuff that's really powerful without even having 3D modeling skills, just creating a sphere. But then the other side of things is that obviously we need more inspiration. We can't all build gardening apps, so that's not enough to get us going perhaps. So this is Microsoft Research. It's called HoloPortation, and I'll probably just let the video speak for itself, because it's pretty impressive. Maybe it'll play. Hi, today we're going to show you an exciting new technology that could fundamentally change the way that people will communicate in the future. Imagine being able to virtually teleport from one space to another in real time. Hey Sergio, how does it feel to be HoloPorted? It feels great to be HoloPorted. So Sergio is going to wear his HoloLens device, and I'm going to wear mine. We can see each other in full 3D in real time. We can interact and communicate as if we're co-present. Sergio, can you walk around my space? Can you walk around this chair? So we're doing everything to give the impression that Sergio and I are present in the same space. Sergio, let's just do a high five. That's great. Thank you Sergio. We call this technology HoloPortation. That's freaking awesome. Wow, obviously this is not for us necessarily, because there's about a squillion kroner worth of equipment around them. And they have an internet connection that is certainly not in Australia, to manage all this data.
So there are limitations for us, but that should still give you inspiration on what is possible. Now, before I just go to the last slide, any other questions? Yeah. So there are a lot of things, like on the table, on the walls, and you saw these nice libraries. Is there something which tells you where the table is? Like where the surface is? So the question is that we saw how we can place things on physical objects, but is there something that tells you where the physical objects are? Why do you need to know that? Well, if I want to make an app which I expand on my table. Oh, right. So if you want to have your app, just like we saw with the Minecraft example, he obviously looked at the table, because he was very focused, and went, create world. And so the HoloLens will see that there's a surface, and then it's up to your app, well, your raycast, to see: is what I'm looking at adequate for what my command is, I guess. So there's a bit of logic there, obviously, to make sure that that surface is either big enough or adequate. But you can have surfaces on an angle as well, and things will fall off if you give them physics. You can anchor them as well, and there's all sorts of stuff you can do. But your app shouldn't need to know where physical elements are as such, because it should work anywhere, right? It shouldn't matter what the room is. What your app should do is be able to say, oh, I'm interacting with something in that room. Is it possible to do whatever the user's asking? I think that's probably a better way of looking at it. Does that make sense? Yeah. Anyone else? Yes, sir. I had pitch, what was it? Pitch, roll, and heave. See, I don't know enough. So the question is, there was, was this in Unity? Long time ago. Okay, but there was somewhere I'd seen pitch, roll, and heave, I think. And I don't know. I haven't actually seen that. Sorry, I don't know. I'm just like you guys, I'm learning this stuff. But I'll look it up. I want to find out now. Not right now. But anyone else? Wow, almost Norwegian. Before I leave you, I just want to mention PubConf. Tomorrow night, at pubconf.io are all the details. I'm doing a five minute talk with lots and lots and lots of animated gifs. It'll be amazing. There are about 12, 14 speakers or something doing five minute talks. The idea is that slides automatically rotate every 15 seconds. Which will be very interesting. Bit like slide karaoke, but almost not. Thank you very much, guys. Appreciate your time.
Virtual reality and augmented reality are terms most developers and technical enthusiasts recognise. What about “Mixed reality”? A reality that is part real world, part digital world, a reality that is enhanced with Holograms. Microsoft’s HoloLens introduces users to an entirely new way of thinking about computing. Learn how to use the Holographic Development Kit (HDK) to build completely new experiences that will blow your mind (only figuratively, I hope), and get a sense for what is possible. You will be challenged to think of how to solve problems in an entirely new way that leverage holographic projections, to provide intuitive natural interactions with the digital world.
10.5446/51723 (DOI)
Nice to see you. You weren't even supposed to do this today because I've already done a two-day workshop in the beginning of NDC, the pre-conference. I was asked just two days ago if I could do another hour because they had some free slots. So here we are. So I hope I haven't prepared too much, but I hope you'll be able to get through this in just one hour. We have been struggling to find a good title for this because we called it Core Agile, and everyone thought it was some kind of introductory course, scrum course or something, and it basically isn't. So we just have to tell everyone what it's not. This is how to be agile without scrum or kanban. I've made this together with two guys, one guy called Renet Kamei and a colleague called Rua Yttraida, but they're not here today, so I'm going to do this alone. Who am I? This is me. Last year. Lots been going on. Now, this is me, like, I think I'm 15. And the reason why I show this picture is just not just to show you how wrong things can go, but it's to show you that I'm not that guy anymore. For me, what I'm going to tell you is not natural. I didn't want to be a manager and I didn't want to work in teams. I'd like to work alone. So everything I tell you now isn't something that came natural to me. I've had to learn it. I have to read a lot of books to find out what's the right way to do this. So if you go out here and say, ah, he can do this because he's born to do this, I wasn't. I had to learn a lot. So as you can see, I'm not that guy anymore and then everyone can change. Every day, when I go to work, I go to work in a company called Webstep. It's a senior consultancy here in Oslo. And the only reason why that's important for you is to know that we have 90 percent of our staff is production and only 10 percent is administrative. So the 10 percent that's administrative has to be fairly efficient. And a lot of my thought work into how we build efficient teams is what we present here today. This is my colleagues. It's a few years old, this picture, but it shows our spirit, happy people. And for me, because of the way we, what I'm going to talk about and how we do things in Webstep, this is also my extended friends and family. We used to just be colleagues, but now I see them all the time in my spare time. And I've said goodbye to some of my old friends just because I've got so many cool new ones that I've been able to hire. So this is the guys. When I started working in Webstep, I'd been working in Microsoft for five years. And before I came to Microsoft, I worked in a company called Objectware. I think it's called ITERA now in Norway. And we used to be in the forefront of agile development. And back then, there was something called RUP, Rational Unified Process. And when I came back, everyone was talking about agile and scrum. And to be honest, I couldn't understand what it was. I did a phone call to someone, a reference call to someone we planned to hire. And they said, oh, we, I work with it every day. We do standups every morning. And I thought, wow, that sounds so much fun. But it wasn't standup like that. So I couldn't understand what scrum was. And I heard the word agile mentioned a lot of times. And whenever I heard scrum from a customer, it was because we couldn't get it to work. We have implemented scrum, but we haven't become agile. So I was starting to think, do I really know what the word agile means? Because it sounds very rigid. But for me, agile is something positive. So I had to look it up. 
And agile, it means a lot of positive things, active, adaptable, easy moving, energetic, quick, ready, sharp. And this isn't very rigid. So I think I was starting to think maybe the way people do agile is wrong. Maybe it's nothing wrong with the agile thought work. So then I had to Google the agile manifesto. And to my positive surprise, it was only four points, four bullet points. Do you have any relationship to this four bullet points? Do you know them? Yeah? I have to read them to you. Because I'm going to come back to them a lot of times. It says individuals and interactions over processes and tools, that's the first one. Working software over comprehensive documentation. Really nice. Customer collaboration over contract negotiation as a sales guy. Yes. And responding to change over following a plan. Four easy bullet points. Yet when people decide to get agile, they kind of miss a lot of the points. A lot of companies, when they decide to go agile, they do like this. Individuals and interactions over processes and tools. What do we do? Yes, we implement some processes and tools. I've never seen companies actually do a lot to make their individuals work better together. Then you talk about working software over comprehensive documentation. What is working software? A lot of people interpret that as it's bug free. And I wouldn't say working software is bug free. You can get bug-ridden software actually solving problems. And you can get software that has no bugs, so that doesn't solve a problem at all. So for me, working software is software that actually solves the business problem. Customer collaboration over contract negotiation. Okay, we managed to lose the contracts. But customer collaboration always says we don't do it too much. It's hard to get them engaged. So agile and scrum has become something for the developers and the development teams. A lot of places, it's really hard to get the business side on the onboard. And when it comes to responding to change, because the business side often isn't on board, we just have to build what they think they want. And when they say, oh, we don't want that, what do you want? We don't know. Not that. And you have to guess until it's close enough. I call that assumption-driven design. I'm planning to write a book on that, how to make best guesses. So what do we think? We think that when it comes to individual cell interactions, you have to go down to every individual, get to know them, what they're good at, what they're bad at, what they like to do, what they don't like to do, and get them to relate to one another, get them to communicate in a good way. Coming back to all of this afterwards. Working software. When I think working software, I think it's a massive transfer of knowledge from the business side over to the IT and from IT over to the business side, so we can understand what they need and they can understand the potential of what we do. And if we want that done, we need to work on how we communicate with each other, because that's a big problem in a lot of places. Customer collaboration, that is built on trust. We need to gain the trust and we need to make it into an actual cooperation, not just that they trust us and we can make whatever we want. We have to cooperate from A to Z. And the fourth point, responding to change. Sometimes you have huge change. We had this customer last year that they got a new owner and the project was stopped, but that was the first time in nine years. So that big change doesn't happen that often. 
When it comes to responding to change, in my book, it's about evaluating what we do and making it a little better at a time. We'll come back to all four points. Let's first start off to see, did these people know what they were talking about? Because we can look at the manifesto, but we have to see, if we can't scientifically prove it, at least see if they had a point. A lot of studies show that they knew what they were talking about, but I like this one best. This is from MIT, where they tried to find out why similar teams perform so differently. Some of them perform really well and some of them perform really badly. And it's very hard to see from the outside why that is. So they looked at teams both in similar industries and across different industries, like innovation teams, like post-op teams in hospitals, like customer relations teams in banks, and so on. I think it was around 2,000 people involved in the study. And what they did, they collected a lot of data about each team, like their age, their education, their IQ, things like that. And then they put this small chip on them, an electronic badge that collected data on how they communicated, the tone of voice, the body language, if they were facing each other when they were talking, stuff like that. They actually sell these badges now, and I badly want one, or not one. I have to have more than one. What did they find out? They found out that 50% of the reason why some teams were performing better than others was all the other factors: their intelligence, their personality, their skills and education, and what they were discussing. But the other 50% was just how they were communicating. And how they were communicating, you can see here, three points. One was the overall energy level that they had; they were engaged in what they were working on. And then they distributed that engagement fairly evenly within the team. There wasn't like one or two guys talking. They were talking a fairly similar amount of time and with the same energy level. So there weren't just two guys contributing, it was all of them. And then they were willing to explore ideas outside their team. They went to other teams, other industries, books, conferences like you do, to get inspiration to do their work even better than they did. Always finding new things that could be improved. I like this. Then let's see, I just want to say something about the size of teams, because at least in some companies, they think that the more, the better. And I think not. The size of the teams is really important. Because the effect of each member you put in is not linear. If you have one and you add one more, you will probably get out more than one. But if you have ten and you add five more, no, you won't. So I've read a few studies, and it seems like they conclude that six is the optimal size for a development group when it comes to developers. And I also think six is a nice size for any group, actually. Why is this? A French guy called Maximilien Ringelmann put this into words, back at the end of the 19th century. He says it's a phenomenon that occurs when you increase the size of a group: you will see that each and every member will contribute a little less every time you put one more in. And this is really hard to counter. You can see this in every industry and in every line of work. He did some experiments in rope pulling. And you should think pulling a rope, that would be fairly easy. But the more people, the less each and every one of them pulls.
So the individual effort goes down. And then they've added this more recently, that the need for communication goes up. And that's also very dramatic, as we will see. (With n people there are n(n-1)/2 possible lines of communication, so five people have 10 of them, while twenty people have 190.) Why does the individual effort go down? I think that's because of something called the bystander effect. There have been a lot of studies on this. The idea is that if more than one person can take responsibility for something, there's less chance that anyone will. And that's why, if you drop dead in the street and there's only one other guy there, he will probably try to help you. But if there are 10 people there, probably not. Because they will think, okay, it's not my job. Someone else will do this. There must be someone here that's a doctor, more qualified than me. It's the same at work. Okay, I see there's a problem there. But I know that the guy over there is better at this than me. So he can do it. Maybe you're not quite sure if what you're seeing is wrong. So you don't want to make a fool of yourself. And sometimes if you fix something and make a bigger problem out of it, you will get in trouble. So this is something we have to work on, to get people to understand that the problem is something I have to deal with. Also in development projects, if you have like five people sitting in a room working on the same code base, it's fairly easy to communicate. You can just turn around and talk. But if you have a bigger project with like two, three, four teams, you have to start from scratch and do everything right from the get-go to be able to not have all this communication between the teams. Unfortunately, when we start off, it's very often something that's already made. They didn't think too much about this when they made it. And we all have to work on the same code base. Suddenly you'll see the communication patterns more like this. You spend so much time trying to communicate with the other developers. There's been a study on this in the US, from the Quantitative Software Management group, where they studied, I think it was, 564 projects. They looked at projects delivered with less than five developers or more than 20, and they chose a metric of 100,000 lines of code. It's not very intelligent, but it's a metric. And what they could see is that the big team was actually able to do this in less time: 6.92 calendar months, and the small team 9.12. As you remember, the small team is less than five and the big team is more than 20. But the cost of the big team had an average of $1.8 million, and the small team $245,000. So this study is fairly conclusive. If you want to do something more effectively, maybe you should try to do it with fewer resources and not more. And not all agree with me. But I don't care. Okay. So how do we do this? I think the first thing you have to do is to include everyone that knows something that you need in this project. As I said, I think a lot of scrum teams tend to only include the developers and the project manager or the scrum master. But you need to engage all the others and get them to be an integral part of the team. They don't have to be there all the time, but they have to be available, maybe have some physical proximity. So you also have operations, the users, and the management or product owners. I think this is something you all know. But it's important to include them, and also include them in the work I'm going to tell you about. Let's do this bullet point by bullet point. I'm going to talk a little bit about relational skills.
And this is the book I first read nine years ago, from a Norwegian guy called Jan Spurkeland. It's really good. I would recommend all of you to read it, because it will tell you so much about the background of how you can build relations and maintain them and stuff like that. Very good book. What he says is that relational skills are skills, not a talent. It's not something you're born with. It's something you can be taught. It's a set of skills that helps you establish, maintain, and develop relations between you and other people. I wasn't good at this at all. I hated chitchat and small talk. I didn't understand it. I couldn't wrap my head around how people could be sitting there for half an hour talking about nothing. So I would just shut up. I've improved. What do you think? No, nothing. No, it's not an improvement. And what is a relation? You can see there are different levels of relations here. At the bottom there are two kinds of relations you would try to avoid or try to get out of. The bottom one, the dangerous relations. It's hard to do something with them. But if you're in a dangerous relation, you'll know, because it's someone that makes you feel bad all the time. It might be your boss. It might be your boyfriend or girlfriend. It might be your mom and dad, brothers, sisters, someone close to you. They have to be fairly close to you to make you feel bad all the time. But when you're in a relation like that, you will have to do something, because it will be dangerous for your health. Someone nodding? No. So dangerous relations, try to avoid them. Then you have the exhausting relations, or the irritating relations as I say, and that's people that you don't respect. So whenever they start talking, you don't listen. You just want them to go away. Every time they say something, it's annoying. Every time they do something, it's annoying. And it's a person that, if you had the choice, you wouldn't work with. And if you're now sitting there thinking, no, I don't know anyone like that, then this is you. And as I'm going to talk about, when you have someone around you who's irritating, it's you that gets annoyed. That person is just acting the way he or she thinks is natural. It's you that gets annoyed, and you're the only one who can do anything about it. We're coming back to that. So every exhausting relation you have, if you have to deal with them every day or from time to time, you have to try to move up to the respect level, where you at least hear them out whenever they talk. You listen to them. You actually reflect on what they're saying. You don't write it off as stupid straight away. It might be stupid, but at least you show them that respect. If you don't do that, you will never have an effective cooperation. So everyone you work with, you need to try to lift up to the respect level. And I think also we would like to move it up to friendly, where you say hello and you smile and you actually think it's nice to spend some time together. You don't need to be friends at work to work together. That's nice too, but it's not necessary. This isn't about everyone going around hugging each other and smoking joints and being happy. This is being friendly. And I'd say there's nothing wrong with finding love at work. My wife and I work together, actually. But a little advice: just one at a time. Otherwise, it could be really complicated. Okay, so how do you work to gain respect, not from others, but for others?
This is something you have to do. Every person has some things that are really, really easy not to like. I like to click pens, make small noises. I've got nervous hands. I don't know why. And that tends to annoy people. So when I work with people, I just take my pens away. I don't have anything to make noises with. You'll find that people have some kind of irritating habit like that. And it's so easy to make that into a really big thing. It might be that they have some unsympathetic character flaw that you don't like, and you make it into this huge thing so that you don't see all the positive, or they just might be selfish. And it's so easy to let that overpower everything else. But you also have to remember that every person has something sympathetic. And if you want to respect them, you have to maybe try to forget about the irritating stuff, let it go, and rather focus on the nice things they do, the nice traits they have. They might know something that you don't know. They might be good at something that you're not good at, maybe at something you're not good at at all. There might be some behavior that's sympathetic that you really like. And it might be something in their history that makes their irritating part less irritating. We did this workshop on Monday and Tuesday, and one of the participants said, you know, I had this guy at work that I really didn't like. And then I got to know his history. And then I understood why he was like he was. And he doesn't irritate me anymore. Very often that will solve a lot. So if you want to respect others, you have to look for the positive and try to tone down the negative. It's not that complicated. And you choose to be annoyed. It's like when you're in your car in a queue: you can be annoyed, but the queue won't go away just because you're annoyed. So you just have to deal with it and make it into something positive. Okay, next level: how do you establish and maintain a friendly relation? First of all, you have to take some kind of initiative. I'm not talking about inviting them over to Christmas dinner. At least don't start there. You can start by saying hi. You have to have some kind of social intelligence, but it's not that important. Social intelligence is something you can learn. And social intelligence is something different in different people. So it's not that important. But you have to be able to show some positive feelings, and you have to be able to show some positive curiosity. And this is not complicated. Positive emotions, what is that? You say hi in the morning. It's not that difficult. But when you meet someone and they don't say hi to you in the morning, do you like it? Yes? No? Do you feel a lot of value when people just go past without saying hi? No? It's so easy. You just say hi. And when you've said hi once and you meet them again, it's very hard to figure out what to say. And I end up saying a lot of stupid things to them, trying to be funny. But you could just look them in the eye and smile. It's not that hard. Maybe you could use their name if you know it. Maybe you could just give them a positive remark when it's warranted. It's not more complicated than that, but we forget to do it all the time. And also when it comes to positive curiosity, this is not about prodding into their history. This is just being aware that your colleague or your friend or your spouse, whoever it is, they are so much more than the part you see of them. At work, you'll see the professional side. And you know that.
You know what they're good at. You know what school they've gone to and stuff like that. But you don't know too much more. Each and every one has a history that explains how they have become who they are. And you might not be aware of your own history. And they might not be aware of theirs, but there's a lot of good that can come just from knowing their history. Each and every one of them has a family. And that family affects them a lot at work. They can have small children. And when they get small children, first they have to be away on paternity or maternity leave. And then when they come back, the children go into kindergarten. They're going to be sick 50 percent of the time the first year. When they go to school, they will start playing football, handball, bike riding and swimming. This is my son. And you will have to drive them around everywhere. This affects you also at work. You need to know this. They might be going through a divorce. They might have some sick parents. You have to know this and take it into consideration also at work. Most people have some interest outside work. And this is where you can find some common ground, something you can talk about. And, like I said, you don't have to sit down and drill into all of this. It's just asking the right questions. What did you do this weekend? What do you like to do when you're not working? Do you have any children? Do you have a wife? Easy questions that are not too prodding. And I can promise you that you will learn a lot of positive things about your colleagues. What we do at Webstep, because the people we hire are senior consultants, and it's really, really hard to find senior consultants that want to move jobs, is we actually use this in interviews. We started using it long before I saw this figure. But we've always done it, because we have to build relations. And we also want to show them that when you start working at Webstep, we will try to teach you a little bit more about yourself. We can develop you further. And we've seen so many times people sitting in interviews, and they haven't thought about this before. They haven't thought about how they have become who they are. We have people crying in interviews, not because we're evil, but because suddenly it's emotional, what they're talking about. So this is really interesting. And you should try to do this, not even at work, but with your family and spouses and mothers and fathers. I know there's so much you don't know about them. So what we have done, actually, is combine this with beer: sit down, four or five people, and say, tell me who you are. Tell me about your 24-hour human, as we call this concept. And I've done this in training so many times, and we say 15 minutes each, and you know it's going to take at least 30, because people like to talk when they first start talking about this. There's just so much to tell, and so much they will remember that they haven't talked about for years. And there are so many cool stories I've gotten out of doing this. I had this colleague who said, oh, I haven't told anyone about this, but when I was in the Army, I accidentally burned down military equipment worth 200 million kroner. Okay. This is not just about being nice. This is about trying to get the most out of each and every colleague.
And if you try to learn a little bit about them — what they know, a little bit about their history so you understand them, what they like to do, what they're good at, what they're bad at, and what they don't like to do — you can try to cultivate the positive. You can cultivate what they like to do, and you can try to tone down what they don't like to do. Because if they're good at something you actually need, you can just have them do as much as possible of that. And if they're bad at something, you don't want them to do it; just get someone else to do that. This is a key part of how I've built my management teams all along. We never hired Gollum, but I would have had the same haircut if I had hair. Let's go further on to customer collaboration and working software, because they go fairly hand in hand. When it comes to trust, trust has five dimensions. There's integrity, and integrity is about being honest all of the time. And honesty is not just answering honestly when someone asks you something. For me, it has something to do with openness as well. So you're honest without them asking. They don't have to ask, because you've already told them. Information is free for everyone. For me, that is integrity. There's competence. And competence — if you're working in IT, you know that if someone is good at what they do, you will trust them. But I think when it comes to being agile and working with business people, you also have to be able to explain what you know to them so they understand. And IT people tend to go a lot into details. I think you have to concentrate on what matters to them and what they're able to process, because when you go too deep into details, they will just fade out. I will come back a little bit more to that. Being in a consistent mood builds trust. If you're very moody one day without any reason, and the next day you're really happy, you might want to call a shrink. But it won't build trust. So you should try to keep a consistent mood if you want people to trust you. Loyalty is important. If someone screws up at work, you don't condemn them straight away; you back them up and say, anyone could have done this, let's try to learn. But it can also be that someone at work is sick, or they have someone at home who's sick. Something might happen that makes other people turn around and stop talking to them. We had a guy at work who got cancer, and none of his friends came to visit him in hospital. We came every week. Whenever something like that happens, I always think that this is a unique opportunity for me to build a stronger relation to this guy or girl. And they will never forget what you do for them when times are bad. So if you want to build a strong relation and loyalty towards someone, you just have to be there when they need you. Openness, as I said when it comes to integrity, is one thing. But when it comes to sharing ideas, it's really, really important, if you want the team to be effective, that you share your own ideas. You can't just sit down and expect everyone else to share if you don't share yourself. So if you want others to share, you have to set a good example. Share all your ideas, no matter how stupid they might seem — a lot of really good stuff has come from stupid ideas. If you want your team to communicate well, you need to work on how you talk to each other. And we say that there are three forms of conversation. One of them you see every second year on TV: the debates. And the debate — for me, I don't watch them, because I think they're stupid.
It's more like a verbal dogfight. They try to win the debate. And it's in the papers the next day: oh, he won the debate because he talked louder than the other ones. And there's no intelligence going on. It's just claims being thrown out. They don't even answer the actual questions they get. So for me, a debate is not a way of communicating. And a discussion — which is where most of us are most of the time — is also a kind of verbal fight. There are not many questions. We try to win whenever we're in a discussion. Oh, I won the discussion. The others lost. I had the best arguments. And especially engineers like to discuss things. I saw this thing on Facebook saying: arguing with an engineer is like mud wrestling a pig — after four hours, you realize the pig enjoys it. And engineers like to discuss. But I think you have to be aware that when you discuss, there's not much intellectual activity going on. Because you don't ask any questions. You don't try to learn from the other person. You don't try to build on the other person's ideas. And for a company, that means you're not exploring the best ideas. You're just exploring the ideas of the guy who is better at talking. So I would urge you to try to reach the dialogue, which I really like. The dialogue is a win-win situation, where you ask a lot of questions and people come up with suggestions. You reflect before you talk. You try to understand the meaning of what they're saying. There's a high degree of empathy and intellectual activity going on. And you'll actually learn something. I think the dialogue is necessary if you want to make the best solutions. Then we have to look at how we listen. We say there are two kinds of listening: intentional listening and correctional listening. Intentional listening is where you try to find meaning in what the other person says. You try to dig for the gold in what they're saying. There's some kind of wisdom in there, but you don't understand it yet, so you have to ask questions. And then you have correctional listening, where you listen for all the details and try to find flaws. You're like a human compiler, trying to find out what's wrong. And you don't actually listen to what's being said. This is what I call engineer listening. I overheard a conversation between — well, I wouldn't call it a conversation, a monologue between — two of my colleagues a few years ago. This guy was telling a really interesting anecdote, and then he used the Norwegian phrase "i forhold til" ("in relation to") wrongly in the last sentence. And the other consultant's compiler just clicked: no, no, no, that's not right, you can't use the phrase like that. So he had to start at the top and tell the story again. That's not a very good way of communicating with others. The problem is that you're supposed to make logic out of this, so you have to turn the correctional listening on at some point. But you don't have to do it straight away. You can turn it on at the end. Try to understand first. I also say there are different levels of listening. The most stupid way of listening is what I call internal listening, or mother-in-law listening, where you only use your own mental model. You hear some information, you run it through your own mental model, and the correct answer comes out. You don't consider that there are more ways of thinking about this. There's only the one way you have learned. And I have a mother-in-law who is like this.
And I've spent a lot of years now trying to reason about what happens from when she gets some information until she reaches her conclusion, because it's a very interesting journey. Not very logical, but interesting. And this is not a very intelligent way of listening. At level two, you at least focus on the one talking to you and try to understand what he or she is trying to say. You try to imagine their mental model. What have they been processing in their head to say what they're saying? And level three, for me, is when you can use more mental models than your own, and you look beyond the words as well. It's not easy to do this global listening, but at least try to get to level two, where you not only think about how you interpret it, but also about how the person talking interprets it. This is a bit high-flying, I know. When it comes to working software, there's always been a big divide between developers and business, and there probably always will be. I've been working in the IT industry for 20 years now, and it's been the same all the time. Some companies succeed. But in a lot of companies, they say about IT: they don't understand the business, and they don't understand how complex IT is. They think everything is easy. They make these plans and say, you have to finish this in six months. But hey, that's not possible. We've seen this a lot of times. And also, because they don't understand IT, they don't see the potential of it. They don't see what we can do to improve things. And when they come up with their demands, it's often very fluffy, because they don't understand IT. They have just said it the way they would say it. So you need to dig into it to understand what they actually want to make. And a lot of the time, they don't really know themselves. They don't know what they want to achieve by doing this. And also, in Norway, the customer journey — kundereisen — is something new and very popular. But a lot of the time, there's no connection between what they want to make and the systems you already have. So it's like: we know where we're going, but we can't start from here. And that's a problem, because you have to start from here. There's no connection between the strategy and your actual solutions. And the business people are frustrated too, because, as they say, the IT people don't understand our business. And a lot of them say: we're not an IT company. We have a customer where 99% of all the revenue runs through their IT systems without being touched by human hands. So they have 300-400 employees for the remaining 1% of their revenue, and they're not an IT company. Interesting. I think nowadays all companies are IT companies. And they say: we don't speak the same language. And I think that's largely because one side tries to explain something and the other tries to find all the flaws. That's why I said we have to communicate better. And that's why they say: they jump straight to details. I'm trying to understand this myself, and they go straight to the details and see all the problems. So it's still a big divide. And I think what IT people need to understand is that when we explain our concepts to the business people, we have to speak in terms that mean something to them. I had this colleague, a brilliant engineer and architect. And I think there was one meeting that opened his eyes, because I said: you have to come with me and explain test-driven development to this customer. Oh, I can do this.
Who's going to be there? Ah, a few project managers, and the rest are business people. Oh no. I can't explain this to business people. Why? Because it's something technical. Yes — but these technical things, what do they do? What's the outcome? And then we had to explain it to them: test-driven development will, over time, take down your cost, take down your risk, and take down the time from when you start the project until it's delivered. Ah, then test-driven development is a good idea. They wanted to do it. So you have to speak in terms that matter to the business people. And these four bullet points are fairly important to them. So if you can translate the solution you're suggesting into something in line with these four points, you're well on your way to getting things done the way you want them done. This colleague told me yesterday that nowadays he works at a customer site where, whenever he goes to the chief architect and suggests something, a lot of the time the architect says: no, that's not a good idea. So then he takes it to the market and business people and says: you know, if we don't do this, it will cost more; we should do this. So now he gets to do things the way he wants more often, by thinking like this. Funny thing is: if you're in a company with more than two people, how many people know everything that happens in that company? None of them. So you can imagine how it is in a company with three, four, five thousand employees. No one knows how everything works. And for some reason, organizations tend to be complex. And for an even stranger reason, they have decided to divide themselves into silos, functional departments. You have IT there, you have development there — operations and development aren't even together — you have finance, you have HR, you have sales, you have whatever. And why have they done this? This is not how the teams work. We have needs across these departments. I can't understand it, but this is how it is. So you know that each and every department lacks the overview it needs to be effective. And you also know that the systems underneath the organization are fairly complex. I've had the enjoyment of working with SAP for two years, and yes, it's fairly complex. The ERP systems are difficult. Then you have industry-specific systems that are fairly difficult. And then you have all these custom-made systems, and you have to get everything to work together. So it's really complex. It's hard to have the overview. So we have some ideas that we've been trying out for how to build some collective wisdom about how everything works. This is an idea I got from a former colleague, and we use it a lot in Webstep. He's a designer. He had been working as a workshop manager for some years, and then he started working on a project at the Norwegian tax authority. And he couldn't understand what they were doing, because he couldn't understand the organization. And when he started talking to people, it was fairly obvious that they didn't understand the organization either. That was the first time he did this. He calls it strategic visualization, but you can use it for basically anything. We used it to document the code base at a customer. So what he does is get key people from all over the organization into one room — at the Norwegian tax authority I think there were 20-25 people — and then he starts drawing the organization.
He tries to understand how everything works: how the departments are connected, how the systems are connected, which departments use which systems, how the systems communicate with each other. And when he has a fairly nice overview — he can do this in maybe one, two, three, four workshops, depending on how big it is — he does it again. But then he does it one-on-one. He has this huge drawing and says: okay, let's go down to your corner of the organization. Can you refine the drawing? And when he's done this 25 times, he takes the new, refined drawing back and shows it to the group. And if they're finished, they're finished. Then he makes it available to everyone. He prints it out on this gigantic paper. And they revise it regularly. And this has been a way for them to understand their own business like never before. For the first time, the Norwegian tax authority could take an organizational map up to the department and say: this is what we're doing. And they said: wow, this is just so complex. We never understood this before. This is just one example. This one shows the migration plans: which systems are coming in and which systems are on their way out? Which systems have we started to remove, and which haven't we started planning for yet? I won't go into detail on this drawing, but it gives them a very nice overview of a very complex situation, so they can see how far along they are towards getting where they want to get. He can do this with huge organizations or with code bases. It's very flexible. It gives you a good starting point, and it gives everyone a good overview of how the business works. And when you get that, you know: this is how our company is organized now. We also combined this with something called Business Model Generation. I think it's kind of like the Bible for a lot of startups, and a lot of older companies have started using it too. Because all companies have a business model, whether they're conscious of it or not. But a lot of the time they don't use it, or they don't follow it, or they don't understand it. People working there have no idea what the business model is. So this is a way of seeing the big picture on the business model side. It's organized as a canvas — you can download it at, I think it's strategyzer.com — divided into nine fields. You start off with your value propositions: what is it we deliver to our customers? What kind of value is it? And then you look at how we build relations with our customers, either one-on-one or one-to-many. If it's one-to-many, how do we do it? Is it adverts? Is it Facebook? Is it Twitter? How do we build relations with our customers? And then you look at how we deliver our goods to our customers. Is it physical? Is it something they can download? Every channel we have, we put it on there. And then you have the customer segments: what kind of customers are we aiming to get? You just jot them down. It can be many, it can be few. And then you go to your key activities: what do we have to do to get the goods into the store? What do we have to do to produce this? And what key resources do we have to have in place to be able to do our key activities? And then you might have some partners that are important to you. It's important to put them up there as well. Then at the bottom, you put in our cost structure — what drives our cost — and what drives our revenue.
You can do this for a fairly complex business in maybe an hour, if you have the right people in the room. And everyone in that room will be able to understand how we deliver value, how we earn money, how we use money, and what we have to do to make this work. We did this on Tuesday. As practice we used a department store, because that's something everyone knows. So we made our own department store with key activities and everything. And then we tried to innovate it. We had some innovation going and found a new thing we could do. And we put it into this value proposition canvas. You can map this canvas onto the other one we saw. You put in: okay, what's the gain of doing what we're doing, for us and for the customers? What does the customer have to do to get this to work? Do they have to register somewhere? Do they have to order online? What's the customer's job? And what pains can this cause? What could be the real pains if we do this? And then you say: okay, the gain creators on the left side — what can we do to be able to harvest the gains in this project? And what can we do to relieve all the potential pains? And what kind of products and services do we need to have in place to get this to work? And it's not that difficult. In fact, we were able to think of something that actually already exists in Germany. So we managed to innovate something they don't have in Norway, in a business we don't really know, in two hours. We got an idea that's already running in another country, and it works; it's lucrative. So this is a cool way of engaging everyone in the company, not just the business people. Everyone can join in on this. You don't need to understand too much of the business. You can look at this from the customer's side and see: well, I'd want this if I were to keep buying stuff here. It's not that complicated. It's a cool way of working. We've done this a few times, and it's a lot of fun and really useful. The last point: responding to change. You have to remember this is an evolution. So we have to find the small things we can do. And if you want to do this, you have to make sure that the key people in the project are available, so they can give you input when you need it, and also so they can test the functionality. So many times we make software and they don't have the time to test it, so we can't release it. And then we work agile but deliver waterfall, because we're releasing something we made six months ago. That's not right. We have to get them to test it frequently. And if they're nice and test it frequently, you just have to remember: when they give you feedback, try to implement it as fast as you can, so they can see that the feedback they give actually leads to something. Get that feedback loop running. It's important. But I think in projects, especially in Norway, we tend to be a little bit too nice to each other. We kind of tolerate that people do counterproductive things. And for some reason, we let them do it. We think it's nicer not to tell them, to let them continue wasting their lives doing something stupid. We let them make the mistakes. And we also accept delays in the project because we don't want to speak up. And the problem is that whenever some counterproductive behaviour is overlooked and people don't care, it kind of creates a consensus: it's okay to do counterproductive things.
And the best people, the most motivated people, they don't like that. So what you might see is that the best people actually leave the project or the job. I think it's important that everyone in the team knows: I'm responsible for correcting mistakes, or at least pointing them out. It might be that someone else is more qualified than you to correct it. But if you see something, you have to point it out. We're talking about self-organized teams. And in self-organized teams, there's no boss. So everyone has to be the boss. Each and every one in that team is each and every one's boss. So you have to take responsibility and point out things that don't work. And when you do that, you have to give feedback. And remember: feedback is not something you do to be evil to someone, to make them feel bad. You give it because you want to help them. And it's nice to help people. Try to understand that and remember that. Also remember that feedback should be concrete. It's not "you're doing a bad job" and then you leave. If you think someone is doing a bad job, you have to tell them what you think is bad. And it should also be fresh. If you give someone criticism for something they did last year, you will only sound bitter. You want it to be fairly fresh. Sometimes, if it's a correction, you might be angry, and then you might want to wait until the next day before giving it, because you want to be clear in your mind — but try to do it as soon as possible. And there are three kinds of feedback. People think there are two kinds, but there are three. You have positive feedback. You should probably try to give more positive than negative feedback. And there's corrective feedback. Corrective feedback is something you give when you want to help people who are not doing what you think is the right thing. Sometimes they are doing the right thing and you're the one doing the wrong thing. So I always urge people to give corrective feedback as a conversation. You don't start by pointing fingers and saying they're stupid. Start it off with a dialogue. Like: I see you doing that in that way — why are you doing it like that? Try to understand how they're thinking — remember the levels of listening; try to understand their model. But the worst kind of feedback is the bottom one: no feedback. Because if you give people no feedback, what you're actually saying is: I don't care what you do. It doesn't mean anything to me at all. And I hate not getting feedback. I would rather have people criticizing me all day long than giving me no feedback. So no feedback is, for me, the worst one. Giving people no feedback — no, it's not good. I think you have to remember that great teams are not nice to each other all the time. You can see a lot of the most famous rock bands in the world break up after 10, 15, 20 years because they can't stand the sight of each other. That's because they have negative relations. But they have been able to do great things because they have been honest with each other. Patrick Lencioni has written a few nice books, among others The Five Dysfunctions of a Team. I urge you to read that one. And I like this quote of his: great teams do not hold back with one another. They are unafraid to air their dirty laundry. They admit their mistakes, their weaknesses, and their concerns without fear of reprisal. And if you've got that going, you've got a very good foundation for making a team great.
To sum it up: there's nothing wrong with Scrum or Kanban or any other agile method. But if you start by just implementing the process, there's a lot you miss. I think if you combine what I've been talking about with some kind of agile method, you'll get a lot further. So if you do Scrum, but also spend some time getting all the individuals to know each other and work on how they communicate, now you've got some words. Now you can say to someone: you're doing correctional listening now, that's not what we need. You can tell them: now you're discussing, this is not a dialogue — we need a dialogue. You can use the words to adjust things. And you have to do that all the time. I've been doing this for nine years and we do it all the time. It's not at the same level it used to be, but we still have to correct each other. You will have to work on building trust, not only within the development group, but between the development group and your users — the ones using your software or ordering the software — and work on how you communicate, not only among yourselves but also towards them, and on how you transfer knowledge. Because so much good can come out of them understanding more of IT and what it can do for them, and also out of you understanding what the business actually means. And try to use the opportunity, as often as you can, to make small improvements in what you do. If you don't, life will become fairly boring after a while. I love finding small things I can improve. Sometimes they turn out not to be improvements and I have to redo them. Any questions? Are you still awake? No? You can come up and ask me afterwards. I have an e-mail address you can send e-mails to. I also have a Twitter account. My blog is only in Norwegian, but this is kind of what I blog about. This has been my attempt to make a one-hour compendium of what we spend two days on, like on Monday and Tuesday. This is actually a class that Programutvikling, the guys behind NDC, are planning to run this autumn. There will be a lot more lectures, but we also want to take this out to specific customers or organizations and work with actual groups, not only random people wanting to take the class. So if you think this is interesting, you can just give me a call, or nag Programutvikling until we put the training up. Thank you.
This is NOT a talk about Scrum or Kanban. The Agile Manifesto is all about communication, interaction, collaboration and building trust. Yet when most companies decide to go agile, they focus on implementing some rigid process that they don't yet fully understand, and they have a hard time getting it to work. We go back to the core of the Agile ideology. We will talk about how you can use relational skills to improve communication, get team members engaged and build trust. Agile initiatives tend to become a matter limited to the developers, and it is often hard to engage other parts of the organisation. In this workshop you will learn what makes teams work efficiently and the basics of how you can get the whole organisation to cooperate better. By building relations and trust between people, and working on how they communicate and interact with each other, you will build the foundation of a truly agile organization. Takeaways: Most of us have experience of both good and bad teams. It is often hard to point out why things turned out the way they did. After this talk you will know what makes teams more efficient, and you will be more aware of the things that make your team dysfunctional.
10.5446/51730 (DOI)
The trouble I find with omitting them is that you run into cases where it's not clear exactly what's happening, how things are grouped. And sometimes the compiler will tell you about this and give you an error, and sometimes it will just do something. That's what I find. Anyway, you need a space there, so you might as well put a parenthesis there. Okay, shall we start? So I'm Robert Virding. Thank you. One of the original Erlang developers. I now work for a company called Erlang Solutions, and we do training, consulting and support in and around Erlang. That's about me. So I'm going to talk about LFE. It's a Lisp implementation running on the Erlang virtual machine. What this talk is going to be about is a little bit of background history, to try and very quickly explain why things look like they do — there is a reason why it is like it is, right? That will get us into what we call the Erlang ecosystem and what that means. And from that we'll go and look at LFE and see how the system affects languages you put on it. So yeah. So the background, why it all started — and now we are literally 30 years back. This was a long time ago. Ericsson had a switch called the AXE, and it was a very successful switch. It still exists in some of their base station stuff — now it's emulated and so on, but it still exists, and there is a central switch for it. It was a really good product, but it took a lot of effort to develop and maintain. So at that time I was working in the computer science lab at Ericsson, and one of the things we were supposed to look at was how we could improve the programming of that type of application, to make it easier and more efficient. That was one of the things we were looking at; we were doing other things as well. So, some reflections around this. We were not out to implement a functional language. That was not a goal at all. Those who have seen Erlang might recognise that we actually started off in Prolog, so we came from a completely different world. We were not out to implement the actor model either. We didn't even know about the actor model. We read later that people said Erlang implemented the actor model, and we looked at papers on the actor model and said: yeah, we do, don't we? That was not a goal. The goal of the whole thing was: we had this problem and we were trying to solve the problem. That was what we were after. So what was the problem? This is a description of the problem domain. It comes from Bjarne Däcker — he was the boss of the lab, and he later wrote a thesis, and these were some of his ten points describing the problem. And if you look there, there is nothing about telecoms in the problem. There is nothing telecom-specific in the problem. Now, of these ten points, some are more interesting from a programming point of view than others. We had to be able to handle a very large number of concurrent activities. We were thinking telecom switches: you might have hundreds of thousands of connections, you might have tens of thousands of calls going on at the same time, plus everything else the switch is doing. So there is a lot of concurrent activity; we just had to handle that. We had timing constraints: things could not take too long, and they had to occur at a certain time. So this puts timing constraints on the system. To put it more simply: the system is not allowed to block, ever. Whatever is happening, the system must never block. You need support for distribution.
That gets onto some later points about fault tolerance. If you want to make a truly fault-tolerant system, you need at least two computers. There is just no way around that. You cannot make a truly fault-tolerant system on one computer. That means you need some form of support for distribution. Interaction with hardware, yeah, software systems and so on. The other point is that these things were expected to keep working. They would not go down. That is the continuous operation over years. You should not have to take the switch down, ever, for anything. So you need to do software maintenance, upgrades and things like that while the system is running. Again, this was a requirement on the system. I think someone asked Joe Armstrong — another one of the initial developers of Erlang — about this. He said the system must never go down, and someone asked him what "never" is. I think his answer was less than four minutes per year, which is about five nines reliability. Typically you want seven or nine nines reliability for these systems, which is not a long time. And it must be fault tolerant. If you want to make a system that is reliable, you have to accept the fact that you are going to get errors in the system while it's running, so you need fault tolerance. You need ways to handle errors — detect them, contain them — and make sure the system does not go down. You might lose things in the system, but the system must never crash. So this was the problem domain we were looking at. And the interesting thing we arrived at afterwards is that this is not really telecom-specific. If you notice, there's nothing about telecoms there. The closest you get to telecoms is the interfacing with hardware, right? But that hardware can be anything. And this is something that came up afterwards; it was not an initial goal at all. So what then became Erlang, and the system around it, was designed to solve this type of problem. We were working on not just the language: we had ideas for the language, but also ideas about how you would use the language to build the system. These things interacted with each other. So there is support in the language for things we decided we needed in the system, and the system requirements put requirements on the Erlang language. That's how it works, right? And that means some things are almost trivial to write in Erlang. I don't know if you've looked at Erlang, but there's this concept of supervisors, which keep track of processes and restart them if they crash and so on — you can write a simple supervisor in less than one page of code. Why? Because the properties of the language and the system interact with each other in a positive way. And that later led to what became OTP. OTP formalized a lot of these ideas, but the basic principles were still there. So in the Erlang/OTP system you've got the language, which has a bunch of features supporting the things the problem domain demands, and our views of how you would build a system around that. They both interact with each other. And this brings us to what we call the Erlang ecosystem: a set of languages running on top of the BEAM, which is the Erlang virtual machine, and OTP. You have Erlang, of course. Elixir is another one that runs on top of it. LFE, which I'll be talking a bit more about today. We have a Prolog and a Lua as well, if you want those. Joxa is another Lisp also written on top of these things.
And the thing with these languages is that if you implement something in them, and you don't do anything really stupid, it's easy to interact with the other languages in the ecosystem. So you can write in one language while calling or using things written in other languages, and they just work together. The rules are quite straightforward, to be honest. But if you follow the rules, you get this property, which means you are not locked into one language. If you like one language, use it. If you find something written in another language in the ecosystem — some package you want to take and use — you can mix them. You can write your system in multiple languages. That's the benefit of the system: you have this open interaction. Okay, if you're coming from a Java world or a .NET world, this is not strange. But here, when you do this, you get all the properties that went into the Erlang design and the OTP design. You get them accessible from whichever language you choose on top of it. It's all there. So if you want to build systems with lots of concurrent activities: great, do it. And lots means lots. 10,000 — that is deadly boring. 100,000 concurrent activities — now it's starting to get interesting. A million concurrent activities — now we're talking. And this works. One example is Phoenix. They were doing tests on Phoenix and running one million concurrent connections. My view of that is: of course you can run a million concurrent connections on the system. That just shows they're not doing anything wrong. If they had run into a limit of, say, 100,000, I would start wondering why — what have you done? They had a test with a million. Another case, of course, is WhatsApp. They came out and said they were running two million concurrent connections on one machine running one Erlang system. There would be one Erlang process per connection there. And they told me they were peaking at three million. Now, that is a very bad design, but it shows it works. Well, okay, why is it a bad design? Because if that machine crashes, you're going to lose two million connections, which is not really what you want. But it works. There weren't any problems doing that. And this is the type of thing you get when you come into one of these languages in the ecosystem — one of the features you get. And you can pick any language; it doesn't make any difference. You get these features. So it's a different world from, say, the .NET and Java worlds. It also means, for example, that if you want to talk with something else — say we're in the Erlang ecosystem and we're now looking at the JVM — yes, there are interfaces between these, and we can talk. That means any of the languages running on the ecosystem can talk with languages running on the JVM. And we actually have a slightly better interface as well, because one of the things that runs on the JVM is something called Erjang, which is an implementation of Erlang running on the JVM. And of course, that's real Erlang, which means we can run distributed Erlang between the Erlang system and Erjang running on the JVM. So we've got an extra path across there, too. And it really works. So the base of all this is the BEAM. The BEAM, that's the name of the Erlang virtual machine. And the question is, of course: what is the BEAM?
Well, it's a virtual machine to run Erlang. That's what it's designed to do. That's what it's for. That both says nothing and says a lot. Okay, well, it's a "duh", right? Yes, of course. That's nothing strange. All virtual machines are designed to run something and have a set of specific properties. So how does this work? What are the interesting properties of the BEAM that come from the fact that it was designed to run Erlang? The large base of what we have support for in the BEAM is this. Lightweight, massive concurrency: that's what helps you handle millions of processes. It does it for you, right? Its base is asynchronous communication. Everything internal to the machine is asynchronous. If you implement synchronous communication, you're sending two messages backwards and forwards asynchronously. So this whole concept of async, which has relatively recently become very interesting, we've always had. Of course you do it this way. It has process isolation, which is a base for the error handling. If something goes wrong in an Erlang process, you can just crash that Erlang process, and you will not take down the system and you will not adversely affect other processes running in the system. So you can quite happily crash processes. We have support for error handling: support for detecting errors, for containing errors, and for writing code that can handle errors. Continuous evolution of the system: that, in this case, is the fact that we can load code dynamically while the system is running, and it's very well defined exactly what is going to happen when you do that. So you can have production systems running and upgrade the code while the system is still running. There's support for soft real time. We had this requirement from the telecom side that things must never block; you have timing requirements on when things occur and how long they're supposed to take. We call that soft real time. If you're coming from a real-time world, you would probably not consider that real time. From our point of view, if you miss a timing constraint occasionally, it's okay. It's not too bad, right? If you're coming from a hard real-time world, that's an error. So we call it soft real time. And there is built-in support for SMP multi-core. The Erlang virtual machine, the BEAM, will quite happily grab every core it can get hold of. It will use them all. It will spread the load, do load balancing and all that type of thing, completely automatically. It's not something you have to worry about. You can control it if you want to, if you have a need for it: you can limit how many cores you want to run on, how eager it is to balance, and things like this. But otherwise it just does it by default. So this business of keeping track of how many cores you've got and changing the application depending on that is something we never have to worry about. And these things are all there, but you'll seldom see them directly in the language. Your language will know about them, but you'll seldom see them. Then there's a bunch of other things, properties of the machine, that you will directly notice in your language. So, for example, all data is immutable.
There is no mutable data in the system at all. You cannot mutate data. And if you tried to go in and hack it so you could, there's no guarantee the system would survive it. You might lose your changes, you might crash — it's not guaranteed at all. You cannot mutate data. There is only a predefined set of data types. There are no user-defined data types at all. It just doesn't work that way. I can talk later about why, but there aren't any. There's support for pattern matching. Remember, this was a functional language — of course you want pattern matching in a functional language. So there's support for functional languages: things like tail-call optimization are in the machine. Again, that was a requirement. The VM has its own view of how code and modules work, and you just have to follow that. You cannot get around it. You can try and hide it, but it's still there. These things will directly affect your language. We'll see some effects of this when we look at LFE. We don't do global data. We don't share. You can try and fake it, and there are things that look global, but they're not. We just don't do that. These are the things you'll see in your language. They very much affect how your language looks. So — I don't know, how many Lispers are here? Do I need to sell Lisp? We'll do it very quickly. So why Lisp? Well, it's old. Literally — I think it's the second oldest computer language, from about the same time as Fortran. This code is from the 1961 Lisp manual; Lisp came out in 1958. And it's not that clear what's going on, to be honest, if you look at it. Well, once you know Lisp, once you get used to the parentheses, it's quite obvious. We're defining three set operations on lists: member, union and intersection. And the two calls at the bottom are just calls testing this. But it's not that bad, and it gets a bit better today — depending on your view of these things. These are the functions union and intersection defined in LFE. So yes, there are still a lot of parentheses, we're still list-based, but it's slightly easier to read today. So, more about why Lisp. Well, there's one very simple thing here. We've got data types. Nothing strange here. We have numbers — that's nothing strange. We have symbols: the bump-it, the drop-it, the if, the size — they're just symbols. And the greater-than is just a symbol. We have lists, of course; it's all based around lists. So we can have lists of numbers, we can have lists of symbols, and we can have lists of lists containing numbers and symbols. There's nothing strange here. These are just data structures. Okay? Nothing strange about this — just data structures. We can have a list that looks like this: the greater-than, size and four. Now it's starting to look slightly interesting, but it's still a list. We can go a bit further and do an if here: a list which starts with if, whose second element is the greater-than-size-four list, whose third element is the call to bump-it, and whose last element is drop-it. And now we're starting to get somewhere, right? This is getting a very code-like feeling to the whole thing.
And the next stage, of course: we have the list define test, with size there, and this list structure. And this is still a list — but it's also a function definition. We're defining the function test, which takes one argument, size; if that is greater than four, we call the function bump-it, and otherwise we call the function drop-it. But structure-wise it's just a list. There's nothing strange about it. So one of the very nice features of Lisp is that it's homoiconic. The programming language itself is just a data structure — explicitly just a data structure. There is no separate abstract syntax; it is its own abstract syntax, which is very nice. And everything works around the basic principle that you have a list, and the first element of the list, more or less, tells you how to interpret that list if you want to interpret it as a program. The define here, in the bottom one — the fact that it says define there — means I can interpret this as the definition of a function. But I can also say it's just a list. Nothing strange about it. This has some very nice features. A lot has changed since then — well, not that much, but a bit. It makes it very easy to make a programming language — well, to program a programming language — because the language itself is just its own data structures. So writing macros is just building data structures. And what does the data structure look like? Well, it just looks like itself. Some people say Lisp has a very simple syntax; some people say it has no syntax at all — it depends on how you want to look at it. And there's a lot of prior work to draw from. There are quite a few different versions of Lisp out in the world today. So that's the general bit, trying to sell Lisp. It's fantastic. Once you get into it, you'll never want to go away, honestly. You'll either love it or hate it. So now we get on to LFE. Instead of first describing what LFE is, I'll first say what it isn't. It's not a version of Common Lisp. It's not an implementation of Scheme. It's not an implementation of Clojure. And the reason for this is that the properties of the Erlang virtual machine make it very difficult to implement those languages efficiently. You can do it — that's not the problem — but doing it efficiently is difficult. And we'll look at some examples of this. So what is it? I'd say it's a proper Lisp, a real Lisp implementation. Yes, it's not Scheme, it's not Common Lisp, it's not Clojure — but it is another Lisp, a proper Lisp implementation based on the features and the limitations of the Erlang virtual machine. It runs on the standard Erlang virtual machine — that was a design goal — with no special machine. You just run it together with everything else. And it coexists seamlessly with OTP and any other languages running in the ecosystem. So you can quite happily mix them. You can use exactly the same features around OTP that the other languages use to build systems, and you can mix them together. That was a design goal of the language itself: it should interact seamlessly with them.
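Before going on to the features, here is the earlier homoiconicity example written out as a minimal sketch in LFE (bump-it and drop-it are illustrative names standing in for the functions on the slide):

```lisp
;; Quoted, this is nothing but a list containing symbols, a number and
;; nested sublists -- plain data we could take apart like any other list:
'(defun test (size)
   (if (> size 4) (bump-it) (drop-it)))

;; Unquoted at the top level of a module, the very same structure is a
;; function definition: test/1 calls bump-it/0 when size is greater
;; than 4, and drop-it/0 otherwise.
(defun test (size)
  (if (> size 4) (bump-it) (drop-it)))
```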
So I just plan to look at some of the features of LFE without going too far into Lisp itself. If you have questions, please ask them — I think I might actually end up ahead of time here; I was a bit worried about overrunning, so I might have cut away too much. We'll see. We'll look a bit at the data types. We'll look at modules and functions: how they look on the LFE side, and how the Erlang virtual machine's requirements on modules and functions map onto LFE. We'll look at something called Lisp-1 versus Lisp-2. If you're not part of the Lisp world, this is very strange — it's a sort of ongoing argument from 30, 40 years ago, and it hasn't been resolved. We'll look a bit at pattern matching and a little at macros. We can talk about other things if you have questions as well. So, the data types. Here we have this restriction from the Erlang virtual machine: you cannot define your own data types. There is no way around that. So we only have a fixed set of data types. We have numbers — integers and floating point. We have atoms, which are pretty close to Lisp symbols. We have lists, of course — it's all based around lists. We have tuples, which are pretty close to what most other Lisps call vectors. We have maps — a hash map implementation. We have something called binaries, which, if you haven't seen them, are really cool; I've got some examples of binaries. And we have a bunch of opaque types that are just part of the system. For example, when you start a process, you get back something called a process identifier, a pid, and that's just an opaque data type you can use to reference that process, send things to it and get values from it. I'm not going to talk about numbers — numbers are just numbers, nothing strange there. Well, the virtual machine supports integers with built-in bignums: there's no limit to the size of integers you can have; it just converts automatically. Floating point numbers are IEEE 64-bit floats. Nothing strange there. Atoms — well, if you want to compare them to Lisp symbols: an atom is a thing with a name. It's just a data type with a name. That's its main property. LFE atoms only have a name; they do not have other properties. If you're coming from Lisp, you will know that Lisp symbols have other properties: they can have values, they can have property lists, they can have function definitions, and other things as well. LFE atoms do not have that, because that is something the machine does not support. You can fake some of it, actually — I've managed to fake property lists, which works, but it's a hack. The only property an atom has is its name. It knows its name; its value is its name. And there are no namespaces on the virtual machine. That's it. There is one namespace; all atoms are part of one namespace. This means there are problems implementing Scheme, Common Lisp and Clojure, which have a concept of packages and namespaces. We just don't have that. It's not in the machine. We can't do it, which means you can't do packages or namespaces. You could try name munging to hack around it: you could say, if I want the name foo in the namespace bar, I could implement that as the atom bar:foo.
That would work until I start talking with something else in the system that doesn't follow those rules — and the rest of the Erlang system doesn't follow those rules — and suddenly my interaction breaks down. A lot of these things I could do if I didn't want to interact with the rest of the system; but if I didn't interact with the rest of the system, it would just be a toy. We don't do faking. So that means we don't have the concept of namespaces. They just don't exist. Well, Booleans: the true and false Booleans are actually atoms, but that's nothing strange. The one I like is binaries. Binaries are fantastic. If you haven't played with binaries and you want to do protocols, come to the Erlang world — they're just so easy. A binary is a bit- or byte-oriented data structure. At its simplest it's a byte array. Everyone's got byte arrays, and byte arrays are very boring. The interesting thing with binaries is the interface. The first expression on top, the binary 1, 2, 3, creates a byte array of three bytes containing 1, 2 and 3. But what you can do with binaries is qualify what each segment is supposed to be. In the second one, we're creating a new binary and saying that the first segment, taking the value of t, is a 16-bit field, little-endian — we've got the bytes of that 16-bit field in little-endian order. Then we've got two 4-bit fields, u and v. Then we have f, which is a 32-bit float. Then we've got b, which happens to be a bit string — just a collection of bits. We write this down, and that builds us a structure putting all these things together. This simple example takes quite a lot of masking and shifting if you write it in C. Here I just write it down. And here's a real example where it starts to get interesting. This binary describes an IPv4 packet header in one go. It's a 4-bit version, a 4-bit header length, an 8-bit service type, a 16-bit total length, a 16-bit ID. There are three flag bits, which I don't really know what they do — I think they're the ones I've seen always zero, but never mind. There's a 13-bit fragment offset, an 8-bit time to live, an 8-bit protocol, a 16-bit header checksum. We've got the source IP, 32 bits, and a destination IP, 32 bits. And we've got the rest, which is just the rest of the packet. That binary description describes that header. I can write this down and the system will build that packet for me, which means that if you're working with protocols, handling them is very often just writing down the structure and letting the system build it. Again, if you want to try to do this in C — I've done it — there's quite a lot of shifting and masking to get it to work.
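The slides themselves aren't reproduced here, so as a sketch, the binary examples just described would look something like this in LFE (the variable names t, u, v, f, b and pkt are assumed to be bound; the segment options follow the description in the talk):

```lisp
;; A binary of three bytes containing 1, 2 and 3:
(binary 1 2 3)

;; Qualified segments: a 16-bit little-endian field, two 4-bit fields,
;; a 32-bit float and a trailing bit string:
(binary (t (size 16) little-endian)
        (u (size 4))
        (v (size 4))
        (f float (size 32))
        (b bitstring))

;; The same syntax works as a pattern, so pulling apart an IPv4 header
;; is one match (field widths as listed in the talk):
(let (((binary (version (size 4)) (hlen (size 4)) (srvc-type (size 8))
               (tot-len (size 16)) (id (size 16)) (flags (size 3))
               (frag-off (size 13)) (ttl (size 8)) (proto (size 8))
               (hdr-chksum (size 16)) (src-ip (size 32)) (dst-ip (size 32))
               (rest binary))
       pkt))
  (tuple version ttl proto src-ip dst-ip rest))
```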
Binaries are really cool; they make interfacing with everything else very much easier. So that's a bit about binaries. Again, this is something that's in the Erlang system — any language in the ecosystem has this interface as well. It's just there, and we use it. Now, a bit about modules and functions. The Erlang modules, which we use — which we have to use — are very simple things compared to the packages other languages might have. They have a name and they export functions: the functions inside that module which are visible from the outside. That's basically what they do. They can only contain functions. The module namespace is flat: there are no hierarchical namespaces or anything like that. You can fake it if you want to, but at the basic level it's just flat. Another thing here — comparing to, say, Scheme and Common Lisp, or Clojure for that matter — is that the module is the unit of code handling. When you're making code, you compile a module. The whole module, that's it. You load a module, you delete modules — everything works at the module level. I cannot take a module and then add functions to it or remove functions from it afterwards. That just does not work. The module is the whole unit of handling. Functions can only exist in modules; the system only supports code inside modules. We have a REPL in LFE, of course, which allows you to define functions locally — and macros, for that matter — inside the REPL, but normal code doesn't work that way. There is a reason here: there are no interdependencies between modules. A module is a completely separate unit. I can assume in a module that other modules exist, but there's nothing in the system that guarantees or checks this. This gets back to the code handling. The idea was that if I want to update code, I can just load in a new version of a module and use that, and I can do that at a per-module level. That's why modules are all independent. These are just features we have to accept, because this is how it works. We could fake things, but then we would lose the interaction with the rest of the system, which is what we want. Here's a simple example of a module. This is an LFE module. We're doing a defmodule; it's called arith. We're exporting three functions: add of two arguments, add of three arguments, and sub of two arguments. That defines the module. Then we've got three function definitions: add of two arguments, add of three arguments, and sub of two arguments. These are very simple functions. If you're coming from Common Lisp, the function definition resembles Common Lisp — that's my background. It does not look like Clojure; it's more like classical Lisp from before. A couple of properties here, again: functions cannot have a variable number of arguments. I define a function with a specific number of arguments, and that's it. But I can have functions with the same name and different numbers of arguments at the same time — different arities, as we call it — and they are different functions. This simple module arith defines two functions called add: one add of two arguments and one add of three arguments. They are two separate functions. If I call add with two arguments, I'll get the two-argument one. If I call add with three arguments, I'll get the three-argument one. If I call add with four arguments, I'll get an error, because there is no add of four arguments. Again, this is built into the system. We could get around this and fake a more Common Lisp style, where every function has one argument which is a list of everything you called it with — but then that's not how the rest of the system works. So a lot of these features come from that. There is a partial way around this, which we can see later.
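As a sketch, the arith module just described would look like this in LFE (the function bodies are assumptions; the talk only names the exports):

```lisp
(defmodule arith
  (export (add 2) (add 3) (sub 2)))

;; add/2 and add/3 are two separate functions that happen to share a name.
(defun add (a b) (+ a b))
(defun add (a b c) (+ a b c))
(defun sub (a b) (- a b))
```

Calling (arith:add 1 2) picks add/2 and (arith:add 1 2 3) picks add/3, while a call with four arguments fails with an undefined-function error, exactly as described.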
A module can also contain compile-time function definitions. These are functions that exist during compilation and can be used in macros, for macro evaluation. And macros can be defined anywhere in the module; they just have to be defined before they're used. There's nothing strange about this. But we don't have things like global data. There is no module data or anything like this. There are no module variables I can set and access. They just don't exist. Again, I could fake them, but I haven't bothered doing that. So now we're getting onto the continual debate: Lisp-1 versus Lisp-2. Pretty much depending on where you come from, you like one or the other. What this is all about, for those who don't know, is: if I'm looking at a symbol, which property of the symbol am I going to use at different times? So if I look at the thing in the middle, (foo 42 bar), that's a call to the function foo where the first argument is 42 and the second argument is the value of the variable bar. Now the question is, how do I get the function definition of foo and the value of bar? In a Lisp-1, for example Scheme, the symbol has one property, and when I'm using it in the function position I assume that property is a function definition and use it as such; when I'm using it in a value position, I use the same property, but then it will be the value. So we just have one thing. In both cases here, for foo I'll use that one property as the function definition, and for bar I'll use it as the value of the argument. In a Lisp-2, we have two properties: a function property and a value property. So in a Lisp-2, for foo here I'll take the function property, the function cell, and use that as the function, and for bar I'll use the value cell to get the value. Actually, classic Lisps are probably about Lisp-4 or Lisp-5 or something like that, because they have other properties as well, but these are the two main ones. And this discussion has been going on for probably 30 or 40 years: which one to do. Scheme and, if I remember correctly, Clojure are Lisp-1s; they just have a value cell. Whereas Common Lisp, and the ones before it, have two: a function cell and a value cell, and which one you take depends on how you're using the symbol. Lisp-1s are generally considered to be cleaner, more pure. So what do I do here? Well, I went with Lisp-2, and this is the reason. I can define something like this: I can define foo of two arguments and I can define foo of three arguments. So I've got two foo functions at the global level. Now, if I was doing a Lisp-1, I could do a let here, a let baz equals some lambda, and I could call that locally. I could define that local function, call it, and be happy with it, and I could still call the two functions foo, of two arguments and of three arguments. But I could only have one local function baz, because that's what a Lisp-1 gives me in this case. And I found this to be inconsistent. Why can I have two functions foo in one place and only one function baz in another place? So I went with a Lisp-2 here. We have a specific flet for defining functions. So I can define two function bindings for baz here, one with one argument and one with two arguments, and they're two separate functions, in the same way that I've got two foo functions here, one of two arguments and one of three arguments.
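A sketch of that flet example in LFE; the function bodies here are just illustrative:

    (defun foo (a b) (+ a b))       ; foo/2 at the top level
    (defun foo (a b c) (+ a b c))   ; foo/3, a different function

    (defun demo ()
      ;; two separate local functions, baz/1 and baz/2
      (flet ((baz (x) (* 2 x))
             (baz (x y) (* x y)))
        (list (baz 3) (baz 3 4) (foo 1 2) (foo 1 2 3))))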
So I can call baz, I can call foo of two, foo of three, baz of one and baz of two. I just found this to be more consistent. Again, it depends where you come from. Generally Lisp-1s are considered cleaner, but I found this to match better, just to be consistent. So I'll do that. So because LFE and Erlang functions have name and arity as their identity, I thought a Lisp-2 fits better. If you look, for example, at Joxa, that's a Lisp-1. So LFE is a Lisp-2, or rather it's a Lisp-2 plus, because I can have lots of functions with the same name and different arities. We're giving examples of two here, but I can have lots of them. I call it a Lisp-2 plus. We do pattern matching, of course. The virtual machine supports pattern matching, and which functional language does not have pattern matching? None that's worth talking about, anyway. Well, I mean, if you haven't tried pattern matching: once you get into pattern matching, you'll never want to leave it. It's a flip, a psychological flip, but once you're in it, you'll never want to get out of it. That's why things like C++ and C# are getting patterns, because it's just such a nice way of working with things. And the virtual machine has direct built-in support for compiling it down, so we might as well use it. And we use pattern matching everywhere. This is how you bind variables. We have function clauses. Our lets, cases and receives all use pattern matching. We have macros for list comprehensions and things like this, which all use pattern matching. We use pattern matching everywhere; it is how you bind variables. So let, for example, doesn't just take a value. It takes a pattern and an expression. It evaluates the expression, matches the result against the pattern, extracts the values and binds the variables. You can use a single variable, of course, or you can use patterns and pull things apart, and we can have multiple patterns. We have a case, which is pure pattern matching: it evaluates an expression, then matches the return value against the patterns to choose which clause, which expressions, you want to execute. Straight there. Receive: receive is how you get hold of messages that have been sent to your process, and how do you do that? Well, you use pattern matching to be selective about which messages you want to see. We use pattern matching everywhere. We can define functions, and while we saw some very simple function definitions, we can have functions with what we call multiple clauses, where we use patterns to select which clause we want to execute. Just extending the pattern matching again. There's nothing strange here. Again, if you're coming from a functional language which has pattern matching, you will recognize these cases. It's just how it's done. Yeah, I forgot to say one thing. If you're looking at the Erlang world, one difference we have with variables is that we have variable scoping. So we have lets, and the variables defined in the let are just valid inside that let body. So we have variable scoping, which the Erlang language does not. Elixir has more limited scoping; it has a sort of scoping, but not quite as much. We have quite strict scoping. Not putting variable scoping into the Erlang language was, I personally think, one of the errors we made.
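A small sketch of what that looks like in practice; lookup and store here are hypothetical helper functions, not part of LFE:

    ;; let matches a pattern against the value of an expression
    (defun get-value (key)
      (let (((tuple 'ok value) (lookup key)))   ; crashes if it doesn't match
        value))

    ;; case chooses a clause by pattern
    (defun get-value-or-default (key)
      (case (lookup key)
        ((tuple 'ok value) value)
        ('not-found 'undefined)))

    ;; receive picks messages out of the process mailbox by pattern
    (defun loop ()
      (receive
        ((tuple 'set k v) (store k v) (loop))
        ('stop 'done)))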
Without that scoping you get some strange behavior. We've got cond, of course. We've got the normal cond with tests, but we also have a test which is a pattern-matching test. That's the one with the question mark equals here. It evaluates an expression and tries to match it against the pattern. If it matches, we choose that body; otherwise, we go on to the next one. So we've even got pattern matching inside cond. We've got comprehensions with pattern matching as well, of course, both list and binary comprehensions. We can write functions using pattern matching. So here's the Ackermann function, using pattern matching. Now, I've put square brackets around the arguments. That is exactly the same as using parentheses; it's just an alternative syntax for lists, which comes from Scheme. Some of the syntax has been borrowed from Scheme and some from Common Lisp. This just makes it easy to read. There's no difference between parentheses and square brackets. So what this says is: okay, we have Ackermann with two arguments, and if the first argument is zero, we choose the first clause and just return n plus 1. If the first argument is not zero and the second argument is zero, we choose the second clause, which is matching against the values, and we call Ackermann recursively. And if neither of those clauses matches, so neither of the arguments is zero, we take the third clause. So here we're very simply using pattern matching to select which clause we want. And this is very functional. We can define the member function, a test of whether an element is a member of a list. The second example here is the more classic style of writing member using cond. We've got an x and a list of elements, and we check: is the list of elements equal to the empty list? Then we return false, because we couldn't find it. If the element is equal to the first element of the list, we return true, because we found it. Otherwise, we step down and recursively call down the list. We can also define that using patterns. Here we just say: we have x, and the second argument is a list. If x is the first element, we return true; otherwise we call ourselves recursively on the tail, which is the second clause; and otherwise, if the last argument is an empty list, we return false, because we hit the end. And one thing about the pattern matching here, for example in lets and cases: if no pattern matches, we generate an exception. This is the Erlang world. We're not scared of generating exceptions, because we know our system will be able to handle that and do the right thing, so we don't have to worry about stuff like that. A quick push here: we don't do defensive programming. We put a lot of effort into system design so we can avoid doing defensive programming and avoid handling errors unless we explicitly want to. In these cases here, well, Ackermann will always match, but if you call this member function with something where the second argument is not a list, I'll just get an exception, the process will crash and someone else will know what to do. We do that. So, I haven't talked much about the Lisp side of it. We have macros, of course. This is a Lisp. These are unhygienic in that they can import values from the outside into the macro definitions. They're sort of half unhygienic.
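The two pattern-matched definitions, roughly as described; note that repeating a variable in a pattern, as member does with x, is an equality check:

    (defun ackermann
      ((0 n) (+ n 1))
      ((m 0) (ackermann (- m 1) 1))
      ((m n) (ackermann (- m 1) (ackermann m (- n 1)))))

    (defun member
      ((x (cons x _)) 'true)            ; x is the head: found it
      ((x (cons _ es)) (member x es))   ; keep looking in the tail
      ((_ ()) 'false))                  ; hit the empty list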
Macros can import things from the outside world, but they can't export values, because a macro expansion is an expression. If I define a variable, I do it inside a let, for example, or a case or whatever, and that is scoped. So I cannot export values, but I can import values, which means sometimes you have to give things funny names to make sure that names don't clash. Yeah, that's a problem. Unfortunately, we don't do gensym. Or rather, we could do gensym, but that would eventually crash the Erlang system, because every time you create a new atom, that atom is interned in the atom table, and when the atom table is full, it crashes. So we avoid interning new atoms. This is not an LFE property itself; this is a property of the Erlang virtual machine. So we don't do gensym. Yes, I could implement gensym, it's about three lines long, but we just don't use it. At the moment, macros are compile time, except in the REPL, where you can define local macros. You can define local functions and local macros in the REPL, and we can use those as well; in a few moments I can give some examples. But there's a little feature here, well, a limitation, depending on how you want to see it. Yes, I can define a macro called cons, but it will never be called. There's just a bit of self-preservation there, to save yourself from doing something really stupid. So you can define macros with the same name as the core LFE forms, but I completely, happily ignore them. Quite silently ignore them, too. There is some safety in the system there. So, yeah, macros. We have the backquote, of course. If you're coming from Common Lisp, you will see the backquote, and it works exactly as you'd expect a Common Lisp backquote to work. So we have a macro add-them, and that macro is going to return a form which is plus of the two arguments. It returns something which replaces the call to add-them with a call to plus instead. And we're using the backquote here, which is sort of an extended quote, where the comma means we're going to take the value of the a and put that in there, and the value of b and put that in there. Nothing strange there. We can define a macro average, which is slightly nicer. Here we just write args like this, just a variable args. If I define the macro with a list around the arguments, that says this is a macro that works on two arguments. If I just write args like this, it will work on any list of arguments, and args is bound to the list of all the arguments of that macro call. So if I call average 1, 2, 3, 4, 5, 6, 7, then args is bound to the list 1, 2, 3, 4, 5, 6, 7. So I can define macros that work on any number of arguments. And this one returns an expression to evaluate the average of all those arguments: it returns a plus with all the arguments put directly in, divided by the length of the list. I have access to the list of arguments, so I can just divide by its length. So if I call average of a, b, c, d, e, it returns the plus of a, b, c, d, e divided by 5 as the expression, because I can work out the number of arguments at compile time. So I can do that, which is nice. Then there's list*, which is a typical Common Lisp macro.
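Sketches of the three macros in LFE; add-them is my rendering of the name, and the multi-clause defmacro syntax for list* is from memory, so treat it as a sketch. In the clausal form, each clause's pattern matches the macro's whole argument list:

    (defmacro add-them (a b)
      ;; (add-them x y) expands to (+ x y)
      `(+ ,a ,b))

    (defmacro average args
      ;; args is the whole argument list, so (average a b c d e)
      ;; expands to (/ (+ a b c d e) 5) at compile time
      `(/ (+ ,@args) ,(length args)))

    (defmacro list*
      ((list e) e)                            ; one element: it is the tail
      ((cons e es) `(cons ,e (list* ,@es)))   ; cons the head, recurse
      (() ()))                                ; no arguments: empty list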
list* just takes elements and builds a new list, except the last element becomes the tail of the list. So if I do list* of a, b and c, that returns a list where a and b are elements and c is the tail, the rest of the list. I can define that as a recursive macro like this. If I define clauses, then the pattern of each clause here matches the whole list of arguments; again, list* works with any number of arguments. If it's a list of one element, we take the first clause. If it's a cons, saying we have one element and the rest of the list, we take the second clause and expand ourselves recursively. And if it's an empty list, we just return the empty list. And if it's none of these, it crashes at compile time and generates an error. So yes, we can have macros with any number of arguments. I can only have one macro defined per name; if I define a macro foo, there's only one macro called foo. That's the limitation. But I can have multiple clauses, like functions. And yes, we have the backquote. The backquote is very nice. Instead of writing down lots of code to build list structures, I just write down the list structure, and it generates the code to build that list structure for me, which is very nice. You can do lots of things with that. Yeah, was there something more? No? Okay. So what I plan to do now, as we're getting towards the end here, is show an example of LFE code. It's very simple code, and it's actually real code. When we were working in the lab, we had our own small exchange to play with, and we wrote a number of small operating systems so we could make calls on our exchange. It was only a small one; we had four telephones. So we could make telephones ring and test things, which is very illuminating, trying your stuff against real hardware. Quite a lot of strange errors. So this is an example. We defined our own thing; we call it teleos here. You could run it on the hardware to make the phones ring, and you could also run it against the user interface and see things graphically. But these two functions are both examples of LFE code and also of very Erlang-y code, the type you often see in these kinds of systems. So what we've got here is a telephone call being made. A is calling B, and we've hit the stage where A has entered all the digits and the phone is ringing. So we're in the ringing state. On the left-hand side is the A side, which is ringing; that's the person with the receiver up, hearing the ring tone in the receiver. And the B side is the phone that's actually ringing. So what we do here is have two processes, one doing the A side and one doing the B side. They're each talking with their phone, and they're talking with each other. So again, we're not scared of having processes here. Yes, we could combine these into one process, and it wouldn't be much more complex, but doing it this way makes it much more usable for doing other things, and much easier to code each one. So what happens here? If we go into the ringing A side, it's got three arguments. It's got my address; it's got the process identifier of the B-side process, so I can talk with the B-side process; and it's got the address of the B side. The address here is the address in the switch itself, so I can tell the switch to do things using these addresses. And we go into a receive; we sit and wait for a message. And if we get an on-hook message on the A side, that means the A side has given up and put the receiver down.
When that happens, when the A side puts the receiver down, the hardware sends us a signal which we convert and send as an on-hook message to this process. All these interactions with the outside world are asynchronous, everything like this. Something happens in the outside world and that results in messages being sent into the system; in this case, the on-hook is sent to this process. And when it receives an on-hook message, it sends the cleared signal to the B side. That's what the second line does. The bang is the send operator; this sends a message. So it sends cleared to the B pid. Then it calls teleos to stop the tone ringing in that phone; that's what the stop-tone does. And then it goes to the idle state by calling the idle state function. So that's what happens when the A side gives up and puts the phone down. Now, on the B side, it's going to receive a cleared, which means it stops the ringing on the B side and goes to the idle state. So that's when the person gives up. The other thing, of course, that can happen is that the B side answers. So again, there's the off-hook signal into the B-side code there. It stops the ringing on the B side, and it sends a message to the A side saying, yeah, we've got an answer here. Then it goes to the speech state by calling the speech state function. Now, on the A side, it gets an answered message from the B side. That stops the tone in the phone, and it tells the switch to connect these two phones together. Typically, in this case, it would be the A side that talks to the switch for these types of things. This tells the switch to connect the A and the B side together, and now they're talking to each other, and then the A side also goes to the speech state by calling the speech state function. Now, what can happen here, in both of these cases, is that someone could try and ring us, call the A side or the B side. We have to answer that immediately, so we can't block here. Again, this is why everything here is very asynchronous. We cannot block. So this is what the seize message is. This is someone trying to call us, seize us. This is what happens when A calls B: it sends a seize message, and if B is idle, it can go and ring the phone. And what we do here, on both sides, is send back rejected, because we're busy. We do that, and then we keep on in the same state. So if anyone tries to call us, everything is asynchronous, we just answer immediately and send the reply back. Nothing here blocks. And if we get any other message, which is the underscore, that just matches anything, we ignore it, throw it away and stay in the same state. So this is some simple Lisp code using pattern matching and receives, and it's very typical Erlang system code, because it's all very asynchronous. For example, whenever we're doing synchronous stuff on the Erlang machine, it's actually two messages. If I do a synchronous request, I send a request off and I sit and wait for the reply message, and both are asynchronous; I can do other things while I'm waiting. So that's an example of code. Yeah, we're nearing the end here. Some future work going on: I've been extending macros. The way macros work now is you define macros in a file and just include that file when you're compiling, and then all those macro definitions are available in the file.
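A rough reconstruction of the A-side ringing state as described; the teleos calls and the idle and speech functions are placeholders taken from the description, not code lifted from the slide:

    (defun ringing-a-side (addr b-pid b-addr)
      (receive
        ('on-hook                            ; caller hung up
         (! b-pid 'cleared)                  ; tell the B side
         (teleos:stop-tone addr)
         (idle addr))
        ('answered                           ; B side picked up
         (teleos:stop-tone addr)
         (teleos:connect addr b-addr)        ; ask the switch to connect them
         (speech addr b-pid b-addr))
        ((tuple 'seize pid)                  ; someone calling us: we're busy
         (! pid 'rejected)
         (ringing-a-side addr b-pid b-addr))
        (_                                   ; ignore anything else
         (ringing-a-side addr b-pid b-addr))))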
What I've done now with macros is that you can make things that look like function calls to other modules but actually happen to be macro calls. So you're sort of removing the distinction between macros being local things and macros existing in other modules, with everything looking like function calls. It's compile time. There is a runtime hack for them, which is a bit ugly, but it's quite cool. This means I can get rid of including macro definition files for various things; I can just call the macros in another module. The module has to be defined. You get some strange cases, which I can show afterwards. It means that you get things that look like function calls to other modules but just happen to be macro expansions, calls to macros. It looks quite funny, but it's perfectly logical. It also means I can quite easily make things that look like functions with a variable number of arguments. I can call a function in another module with 1, 2, 3, 5, 10, 15 arguments; it just happens to be a macro, and you don't see the difference at call time. It works quite well. I like the Lisp machines. They did a lot of very cool things back in those days on the Lisp machines. Afterwards they became technically old, just unnecessary: having specialized hardware for one specialized type of language died when processors became better and cheaper. But they had a few cool things. They had something they called flavors, this is the MIT Lisp machines, which was an object package on top, but it didn't use classes. It had another way of incorporating behavior, through flavors. You didn't include a class; you included another flavor, which gave you a set of features, and you could include multiple features and things like this. Like cooking, right? I'm making a sauce, and I'll include a bit of this and a bit of that and a bit of this, and I'll get something at the end. It's very close to that. Object orientation doesn't really map that well onto the Erlang system; there are problems doing it. But flavors map, well, not badly. It'll work. Anyway, it's fun. So, I mentioned before that we can talk to the JVM. There is a simple Java interface baked in as part of the Erlang system, so you can talk to Java running on the JVM. We're doing some work on improving that, to get a better interface to Clojure. And then the Lisp machine structs: they had a package for defining structures. You could say how many elements, where they were, what type of structure you wanted, and a lot of other features as well. So I'm working on that too. And that will subsume two existing ways of trying to get around the problem that you cannot define user data types: Erlang has something called records, which is just a tagged tuple, and Elixir has something called structs, which is just a tagged map. With the struct package we'll be able to define both of those and wrap them up; there's a small sketch of the existing record support below. So that's about it. And the final question is: why? Why, why, why, of course? Well, I like Lisp. Lisp was the first proper high-level language I learned after Fortran and Pascal. So that's why, of course. I like Erlang, and I very much like the way you use the Erlang system to define systems. It's very versatile, very refreshing. It's a big rethink. And I like to implement languages. Implementing languages is fun. Seriously, it is.
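For a taste of what's already there, LFE's defrecord gives you the Erlang-style tagged tuple, with generated constructor and accessor macros. A sketch; the make-person and person-name names follow LFE's record conventions:

    (defrecord person name age)

    (defun example ()
      (let ((p (make-person name "Robert" age 67)))
        (person-name p)))   ; p is the tuple #(person "Robert" 67) underneath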
So implementing LFE is, of course, a pretty natural thing for me. LFE is definitely past the hobby stage. There are products, serious systems, based on it. So it's past the hobby stage, and there's support for it. And I'm definitely not the only person working on it; we're working with it now. So it is at that level. So yeah, there is me. You can reach me: the first one is sort of my Robert-the-programmer mailing address; the second one is my Erlang Solutions mailing address. I'm on Twitter, and there are a lot of links here to various things on LFE. The lfe.io page is the homepage for it. I did not do that; I'm not the person behind the artistic side of everything like that. There's a guy called Duncan McGreggor for that. We have a Google group. We even have a Slack team these days. And there are some IRC channels as well. And I just want to end with one thing. Yes, there's an Erlang User Conference in September in Stockholm. It's the biggest Erlang conference in the world. Please come if you're interested; I'll probably be there as well. And that's it. Okay, I'm only one minute late, which is almost a record. Questions, if you have any? Have you thought about ML on the Erlang VM? Yeah, Haskell; people come and hint at various languages, right? So ML, Haskell, F#. I think you could probably do quite a decent F# implementation on top of the Erlang machine. It wouldn't be the same as the .NET F#. Well, the language might be the same; the interface to the outside world, of course, would be very different. It's time, literally, that stops me. You said you could have the same language, so that would be the first language on the Erlang VM that would have a strong type system? Yeah. And that all behaves? Yeah. Because, well, yes: the languages running on the Erlang system, by default, don't have a strong type system. They're all dynamically typed, not statically typed. You could do that for F# if you write a static type system into the compiler itself. You could statically type the F# side of it. You could define types, you could define types for the interfaces, things like this, and you could do static type checking on the F# side in the compiler. You could not get full guarantees. I mean, you can get guarantees inside the F# part of it, and you could say, yes, these outside functions, these outside modules have these types. But there's no guarantee someone doesn't go and change those afterwards. So you can get a certain level of guarantee. Yes, it could perfectly well be done, and the more of the F# compiler is written in F#, the easier it would be to do. The work would probably be in working out the interface between them. But it's time, that's the thing that stops it. You're saying that there are production systems using LFE. Yes. Why do they pick the language, because they enjoy it? Or are there particular reasons why this would be a better fit than the others? Why choose LFE? Yeah, because you like Lisp. That's really it. The thing is, all these languages running on the virtual machine are basically all the same at the bottom level, however you want to look at it. I mean, that is what the virtual machine, the BEAM, supports. It's designed to run Erlang and it supports Erlang. And you can put other skins on top.
You can have the Erlang skin, you can have the LFE skin, you can have the Elixir skin, but at the bottom level they're all the same. And that's why they can interact. That's why I can pick the language I feel most comfortable with, which I like, and use that, and at the same time I know I can take something someone else has written in another language, because the bottom level is all the same. That's the feature of it. That's the benefit. And I get all the benefits of the Erlang system. So yeah, it's because you like the parentheses. Any other questions? I've got some stickers for those who are interested, by the way. Yes, yes. Yeah, I've saved some for you. Okay, thank you.
Why yet another lisp? This talk will look at some of the basic properties of the Erlang VM and OTP and how they affect implementing languages on it. It will show what makes existing lisps, even Clojure, a bad fit. LFE (Lisp Flavoured Erlang) has been designed to run efficiently on the Erlang VM and at the same time be a "real lisp" providing Lisp's good bits. We will describe the language and how it uses the properties of the system to run efficiently and interact seamlessly with other languages running in the Erlang environment.
10.5446/51732 (DOI)
Yep, I think I'll just get started. Morning. You are the people that did not get drunk last night. I managed to get out of bed this morning. My name's Martin Hinshelwood. I'm a couple of things. I'm a Visual Studio ALM MVP, so I do a lot of consulting around TFS, VSTS, around the Microsoft stack and DevOps. But I'm also a Scrum trainer and coach for Scrum teams. And I think that is a very unique perspective. Most of the coaches don't like the tools guys, and most of the tools guys don't seem to like the coaches. I try and sit a little bit in the middle, although my background is a lot of tools, so that can be a lot of fun. And one of the things that I've found is missing from a lot of organizations' stacks is load testing. It's very missing. I had a conversation with a gentleman earlier about how much load testing they do at their organization. And yeah, I travel around a bit. I live in a little place called Scotland, in a town called Glasgow, which is a pretty big town, but it's a pretty small country, and we are kind of stapled to another thing down here, which I tend to not like to talk about. But I travel around a lot, an awful lot. I have customers in a lot of different places, which means a lot of travel. And I come to Norway quite a lot. I have worked with ProgramUtvikling, and I do some Scrum training with them, Professional Scrum Master. We've got one in a couple of weeks, so if you want to come along, that would be awesome. Not free, but you can come along. And speaking of Scrum training, there's a thing that a lot of organizations have been moving towards. Anybody know what it's called? Kanban, yes. But a lot of organizations have been moving towards this thing called Scrum. So, the Scrum diagram. Anybody seen the Nexus diagram? For when you get lots of teams working in Scrum; it's a little bit more complicated, but it's still fairly straightforward. Scrum is this unicorn that's going to ride in, save the day and skewer our enemy, which is waterfall. Is anybody here implementing Scrum right now? Scrum teams, a little bit. Do you have unicorns? Or do you have something that looks more like this? Because most organizations I go into have something that kind of looks more like this. It's pretty hard. And one of the reasons we get stuck is that we want this: we want professional Scrum teams. That's the important thing. But we end up with mechanical Scrum teams. I like to use the word mechanical. The term Scrum.org uses a lot is amateur Scrum teams: we're just kind of playing the game, we're not really serious about it. Ken likes to use the term flaccid Scrum teams, which, you know, you don't want to be that. And there's two things missing from our implementations of Scrum, and almost every implementation is missing either one of these two or both of them. I'm only going to talk a little bit about one of them. The first one is the values and principles. If you've read the Agile Manifesto and you understand the background, why a lot of the things are there, then you'll know what I mean. There's a lot of things missing, and following the values and principles gets you a long way towards better deployments, better software. But we also need technical excellence. We can't get there without technical excellence, no matter how awesome our values and principles are, no matter what process we follow, whether it's Scrum or Kanban or waterfall or whatever else is out there.
If we don't have technical excellence, we're going to falter at some point, because we can't deliver well. And there are lots of areas we need to focus on if we want to achieve our goals of delivering awesome software with technical excellence. I'm sure you guys do stuff around some of these areas. Hopefully you've switched from monitoring project progress, which is what we've traditionally done, to monitoring the flow of value going to your customer, because you're more likely to deliver what the customer wants if you go there. Who's actively managing their technical debt? Anybody here? And are we using SonarQube? No? Go look at SonarQube. It's a really good way of at least having metrics around your technical debt. I'm not saying it's going to solve your problems, because tools don't solve problems. They just help a little bit. So there's a lot of things we want to do. We need to have our backlog refined. We need learnings around that. We need to understand what it is we're doing. And there's a new term. We had the Scrum term, and now there's a new term around. You've probably heard it a couple of times; there's a couple of talks on DevOps. Anybody go to Adam Cogan's session? He talked a little bit about the history of DevOps. And DevOps is the new unicorn in the room. Anybody familiar with this particular unicorn? I've not been seeing him on Facebook recently. This unicorn is so awesome he poops ice cream. This is the DevOps unicorn. And we've got to eat it. We're eating this. And not only do we have to eat it, but we're going to share it with all of our team: get everybody doing this DevOps thing. There are a lot of practices in the world of DevOps that help us get better, get faster at delivering software. I had these categories up. There's a lot of practices in each of those categories. This is by no means all of the practices, but these are some of the things that you might want to be looking at. And of particular interest for me just now is automated testing, which you'll notice is all over the place. We need automated testing to help us understand our flow of customer value. We need automated testing to manage our technical debt. Yeah? And somebody mentioned earlier, yes, testing in production. Where do you do a lot of your testing? Most people do a lot of their testing in production. You might do some unit testing, but where do you do the rest of your testing? A lot of the time, UAT is just another way of doing testing in production. So we need some way of gathering evidence in production and feeding that back into our cycle as part of those feedback loops. If we want to have a production-first mindset, and we want to look at how our system is working in production to get more of that feedback in, we need to be doing some kind of APM, application performance management. Do you guys have tools at the moment for that? Anybody got System Center installed on their network? That can help you monitor. I'm not a big fan of System Center, but it is, I almost went off the edge there, did you see that, it is one of those tools out there. There are other tools; a couple I might mention a little bit. But as well as monitoring our application in production in order to help us test in production, because if we're testing, how do we know something's going wrong? Well, we need to monitor our software, understand how it's working. We also need to be able to get the problems that we find fixed and into production as quickly as possible.
So we probably have some sort of DevOps pipeline, some automated tools to get us from A to B. Hopefully you have a completely zero-touch automated process to get from a developer check-in to deployed code. That's an ideal world. Lots of times that last step before you actually go to production is somebody having to push a button and say, I approve. But many organizations are moving towards a more continuous delivery approach. So we need testing, but what is testing? There are lots of different types of testing, and we need to focus on the particular areas that we need. So I'm interested in: what testing do you do? The first type of testing is exploratory testing. Anybody here do exploratory testing? Yes. Do you call it exploratory testing? No, you just call it opening the application and clicking around. Yep. So I'm pretty sure all of you guys do that: the smoke test you do when you've deployed your software, just to make sure the happy path works. We open the application, we click around. That's a type of exploratory testing. The other ones in there, whether it's user acceptance testing or alpha and beta testing, we're getting people to actually use it. It's all manual. It's all manual testing. You can't really automate a lot of that, but you can take the learnings from it and use automation to simulate some of the things that those folks do. Awesome. So some of you are doing it officially. Some of you are just opening the application and clicking about. Because, you know, nothing tests a piece of software like F5 and then running through the debugger. The next is more developer-focused tests. Hopefully all of you guys have reasonable coverage, I'm not going to say high coverage because it depends on the state of your application whether high is possible, of good developer tests, unit and component tests. These are all automated. And they're very specifically technology-facing, but what does a unit test tell you? Do you know what a unit test is for? That the code that you write does at least what you wanted it to do. You as a developer writing code: all a unit test tells you is that it works like you intended it to work. Yeah? It doesn't tell you whether you've met the customer requirement. It doesn't tell you whether it does what it's supposed to do, only that it works the way you intended it to work. Which, if that's a minimum bar, is a pretty awesome place to be as well. We need unit tests. We might add some more functional tests. Now, I always have automated and manual in here, because some of them do end up being manual; if you have a bunch of manual testers in your organization, that can affect that. Whether you've got user story tests, prototypes, simulations of your software, it depends what type of software you're building, what you can do there. But there's a lot of tests there that are again team-focused, but more towards the business: understanding that you've met the business requirements. So we've made sure that the code does what we want it to do as coders. We've kind of made sure that it does what the customer wants it to do. And then maybe we've done a little bit wider testing. I don't know, anybody here on the Windows 10 Insider program? I'm on the Windows 10 Insider program. I've got a new build pending that I decided not to install last night, just in case. But they do a lot of user acceptance testing in the Insider program. They send out new builds on a weekly basis.
I get builds roughly weekly, sometimes not, because a build kind of sucks and they don't ship it. I get builds kind of weekly, and I'm the exploratory tester. I'm the person using that beta version for my day job. And when I find bugs, they collect telemetry, or I file a bug report, depending how bad it is, and they try to understand what those problems are and fix them. So a lot of organizations are doing that, but what's missing? There are actually quite a few things missing, but I kind of put them in one box: performance and load testing. You need to know that your application is going to work even when it scales up. I put a lot of the other 'ility' testing in that box too, so security, usability; there can be a lot of things in there, but performance and load testing is the most important thing you can go automate if you already have some of these other things. And it's much easier than you think it is, because traditionally load testing has been hard. Anybody tried to set up load testing before? Yeah. How much fun was it? How much fun was it when you told infrastructure what you wanted and then they realized what you actually wanted to do? They start to have a fit about the number of calls you're going to make across the network and how you're going to affect performance for everybody else that's using the system, and all kinds of things. And then you ask for the five servers in five different locations around the world, and they just fall off their chair, and there's no way they can deliver that, and then you've got a problem. It's so much hassle to do load testing that most people don't bother. I used to work, you maybe don't know him, he's speaking here today, but I used to work with this gentleman quite a lot. And he has a team of software developers, like any team of software developers, but they're kind of a little bit cowboy, and they're not necessarily doing all the testing that they should be doing. Now, I know of specific examples of applications from my customers and people I've worked with that have had problems, but I wanted to be a little bit less, I don't know, weird. So I just googled around: what famous ones have you heard of? Oh, see, I've got it here. I don't have it up there. There we go. Remember this debacle, when Obamacare launched, healthcare.gov? Nobody could use it. It didn't even support 10 simultaneous users. It didn't work at all. Built by a team that went away for a year and a half, built the application, came back after a year and a half and went, ta-da, there you go. Didn't work. Completely useless, not fit for purpose. It had all the functionality in it that they wanted, but if it doesn't work, it's completely useless. And remember when Instagram launched? Instagram launched and immediately dropped, gone. What happened? Load testing. They couldn't handle the number of users that were coming on. They didn't do any load testing. Then there's one I get told about. Yes, what's the name of it, where everybody got logged in as Eric? Was everybody Eric or something? A caching issue. But they had massive load problems; the site didn't operate. Can you think of any other famous instances? There's actually a really good one, one of my favorites. It was the Obama campaign versus the other guy, who nobody can remember, first time around.
And Obama insourced his software development, because you need a lot of software development in order to win an election in the US, because you have lots of boots on the ground doing canvassing in areas, doing straw polls. So they go to the polling stations and they're making guesses about who people are voting for; in America it's kind of obvious who people are voting for when they walk up to the polling station. You know, if they're armed, they're Republican; if they're not armed, they're probably going to vote Democrat. And they fill out, on an application running on their phone, who's coming out, so where the polls are likely to sit. Yeah, it's a guess, but it's a good guess. So if there's lots of your party coming out at a particular polling station, then you want to move all the people that are canvassing in that area, knocking on doors, getting people out to vote, over to another area where turnout is lower. Yeah? If you can do that strategically and effectively enough, you can get more of your voters out versus the other guy. And the other guy's app failed. They couldn't handle the load from the number of people using the app and filling it out. Actually, they couldn't even log in. The login system didn't even handle the load of authenticating all of the users, let alone, once they got that fixed, the application itself, which was a complete nightmare. Now, the two ways this went: Obama had Scrum teams working in-house building the software, so they delivered it iteratively, so they had tested it by the time it launched. Yeah? Whereas the other guy outsourced to India and got back something after a year that didn't work, because they got it on launch day. Literally, they got it the day before. It's not going to work. So don't leave load testing till last. If you leave load testing till last, the only thing you'll find out is that your software doesn't work, and you'll find it out last. And now you've got to go figure out why it doesn't work, why it sucks. And usually, in most cases I've seen, it requires almost a complete rewrite of the back-end systems to fix a lot of those issues. Sometimes you can get away with caching. Sometimes you can get away with brute force: let's spin up 30,000 servers in Azure. I believe one of the sites up there that I mentioned, I've seen one before in Australia, could support four simultaneous users per web server with their application, and they needed to support about 100,000 simultaneous users. So that didn't work well. How long do you think it's going to take to do the engineering and figure that out? Even if we get 10% here, 20% there, 30% somewhere else, it's going to take a long time to figure out where those problems are and fix them. So I want to show you why there is absolutely no excuse not to load test your applications. So let's see if I can actually get this working. I just connected to a virtual machine. I'm using my virtual machines in Azure, mainly because I'm running preview software on my desktop and I can't always get Visual Studio to work. But I actually don't need Visual Studio. Anybody here set up cloud-based load testing with Azure? Nope? Awesome. Let me show you how easy this is. Where am I going to go? I see these menus. Hopefully that's back. Let me see. So I have a Visual Studio Team Services account here. This is where the load testing lives.
You don't need to be using VSTS in order to use load testing. You just need an account. How many people here have MSDN? You all have an account for this. It's free. So you get five users for free. And there's something pretty interesting right here: load testing virtual user minutes. You get 20,000 virtual user minutes out of the box every month for load testing. No charge. That's what you get out of the box. So I have this tab that says load test. And what say we do a little bit of a load test against the NDC website? So I'm going to create a new URL-based load test. I can get more complicated than that if I want; you can see I have quite a few options there. I can have multiple scenarios, and I can add URLs into a scenario. I can add additional headers and query strings, we can pass stuff around, and you can do a little bit of variable stuff. So I'm going to call it NDC Oslo basic test, and I'm going to save that. Pretty simple. And since I've saved it, I'm going to go run it. I just created a load test. And it, well, it's not running quite yet. It's in a queue to run. But if I pop back over here and click edit, let's see what we're going to get. I go over to settings. I wanted to set it running because it takes a little while to spin up and then it takes a little while to run. I can control the duration. So I'm doing a two-minute test with a constant load pattern of 25 simultaneous users. It isn't that much, but I don't want to use all of my minutes. I can do a step-based model as well, where we ramp up the users. There are two types of load tests. There's: I want to test that my system works as I expect it to work. Like, I know I have to support 25 simultaneous users, and if I hit it with 25 simultaneous users for a minute, everything should be okay. Yeah? And then there's: I want to know how many I can support. I want to hit my server and ramp up the load until my server fails, so I can identify, one, how many users can I support right now, which I can tally against my peak load, and two, where am I going to start seeing performance degradation first in my system. If I jump back to the run, let's see if it's running yet. It's running. We get nice little graphs. What's going on with the load? It looks like it's working fairly well. Are we seeing any errors yet? Nope. Failed requests? User load is constant, no failed requests. That's what I would expect from a fairly straightforward web application, especially one like NDC's. I'm only hitting it with 25 simultaneous users. There's 2,000 people at the conference, all possibly looking up the agenda on their phone, so 25 simultaneous users is probably not unusual, although maybe Jacob's having a heart attack right now because he's getting alerts from wherever the server is hosted. But I get a nice little graph and no errors. That's pretty good. At the end, it will collate all the data and tell me about the usage, and I can continue to come in here, look it up, see the trends over time. I can run it a bunch and figure that out. Does that look good? Did that look easy? Yes. Okay. So, anybody here use Fiddler? Okay. So how about I try a different way of creating this load test? Let me just open, here's one I made earlier, and import an HTTP archive. I actually didn't save it out of Fiddler. Did I save it out of Fiddler or did I save it out of Edge? I can't remember, but from an HTTP archive you get all of the different headers all set up.
It will do all of the query strings. So here I have one web scenario where I connected to NDC Oslo. It hits a couple of URLs, and I'm not sure why it hits those URLs, but I went to the agenda; safebrowsing.google, so it's hitting something there behind the scenes. So you're now simulating a more real-world hit. And then I went to my talk. And I can just save that as a load test and run it, and it will run just the same. We're not parameterizing anything. We're not doing anything scary. We're just running a load test. Can you guys handle that? So what are you going to do on Monday morning when you go back to the office? Break the system. Set up a load test, see where it fails, see how quickly it fails. It's free. And every developer in your organization with MSDN can go and set up their own VSTS account, and they all get 20,000 minutes. Now, that obviously would be difficult to manage, but you can do it if you're just doing it on the fly. If you can't get your organization to buy tools, to buy the capability to do load testing, or they won't buy stuff from particular vendors, then this is free, cheap, easy, easy to set up. Easy peasy. The other way to set up a load test: anybody done load tests through Visual Studio? No? My goodness me. Okay. So let me open up Visual Studio. I have a web project here, and I have the standard, oh, that's going to take a while. No, I have the patch. See, it's finished already. Anybody that's using Visual Studio 2015 will know that some of those boxes come up for a really long time. Fixed in the next release. Yes. I think the RC just released, hasn't it? Yeah. All that stuff's fixed, the MEF thing that takes ages, the dialogs, all that stuff's fixed. So I've got Books, which is just a simple Web API app. All it is is a Web API; there's nothing else in there. Books.Tests, a couple of unit tests. They are totally fake unit tests, because it's just demo code. But then I have this web tests folder. And I'm going to right-click and show you how to create a load test. So there are two things I can create: I can create a web performance test and I can create a load test. I don't know where my load test is going. Which edition do you need for that, was the question? That is a very good point, and something I should have mentioned. You need Enterprise. Load testing is considered an enterprise feature, so you need the Enterprise tools to be able to do that. Same with VSTS: as long as you get an MSDN Enterprise, it will all just light up. But you can just go pay for it regardless of what tools you've got. You know you get a ton of stuff through your MSDN for free? Yeah. You get Azure minutes, you get servers, you get all kinds of stuff. It's kind of just part of that bundle. The more expensive the version you buy, the bigger the bundle. Which is pretty good. So I can create a web performance test. Awesome. And if you're using this horrible browser, this evil, evil browser, you can see you'll have a Microsoft web test recorder plug-in. When you install Visual Studio, the first time you open IE, the thing pops up: do you want to run it? Usually you say, no, I don't want to run that, because that's going to do stuff. You need to have that enabled, and we've got the web test recorder as well, and we can enable that. But that's not actually what I was looking for. I was looking for the other option. Why can't I find that? There's load test. Okay. Here's a web test I created earlier.
So I just created a blank web test, and I can add requests, add loops, add whatever I want to simulate how I want to hit my server. Again, with this you can import a Fiddler trace as well, which is useful for getting the full stack, but maybe you just want to hit particular APIs. If you're building a bunch of APIs, you want to hit them in a particular order; you want to set all of that up. So I can create this here. There's a way to parameterize the web servers. So once you've added the items in, this is a static web test, I can then parameterize it, and it will set up this little context parameters piece where I get a property that I can just go set. And here I'm hitting an Azure website I deployed, and then I can actually just run this test inside Visual Studio. So this is not using any of my load test minutes. This is just using Visual Studio, and this is just a web test, not a load test yet. I'm just running it in Visual Studio, and it's gone and made that one call to that API. It's come back with an OK, and we're done. Pretty straightforward. But then I can go in and add a load test. When I add a load test, I can use the cloud-based load test, which is the same as what we were doing in the cloud; it will appear in exactly the same place, with all of the same setup, but we get a lot more features and control here. Or I can do an on-premises load test. So I don't need anything else. If I just want to sit at my desk, not use any cloud infrastructure whatsoever, and hit my servers with a load test, I can do that. So let me just do that right now. I'm going to create, let's not do five minutes, let's do a one-minute load test, and I am going to add whatever tests I like in there. If Visual Studio understands that those things are tests, you can add them to this mix. So yes, you can simulate load tests with unit tests. If you've written some unit tests that hit your system in the way that you want it hit, you can use unit tests in here as well. I did that with an organization that was building a business intelligence tool, and we wanted to hit the server with a bunch of MDX queries. There's not a tool for that. So you write a bunch of unit tests that run the MDX queries, and then load test it with this. So I want to take my basic Books test and add it in there. I'm just going to add one right now, but I could add a bunch of different ones. I can set up different network mixes, so it will actually simulate different network conditions for those. We can set up different browsers and have a mix of browsers, with percentages for each. It's basically going to pass the user agent string for that browser, so that if you're doing something special on the other end because of that browser, you'll catch those problems. And then here's where the fun starts. If you're running load testing locally, you're going to have to have access to the servers that you're hitting if you want to understand how your load test is impacting those servers. So if you have admin rights to those servers, you can tell this to hook on to the performance counters, just the standard Windows performance counters, and pull the data from those performance counters into your central server. Now, if you're just running in Visual Studio like I'm going to do, so that my desktop is the only thing in the mix, then I'm really only going to capture locally.
My website I'm hitting is an Azure website, so I don't have access to the server telemetry. If you're local on an internal network and you don't have access to the internet, you can use this tool to go hit your server. You can add performance counters for your servers that you're hitting — as long as you've got admin rights, you can go collect that information. And you can see all of the settings, kind of similar to the stuff we had in the web: the performance counter sets, and we can go add different machines, add different collectors in there, and then we can just run that. Visual Studio is going to orchestrate the execution of the test. So I probably have a max number of calls that I can make through my pipe to those servers — I've only got one machine in the mix. Does that make sense? So it may be a little bit slow to start up, although this is supposed to be a beastly Azure machine — I think there are eight cores and 16 gig of RAM and all those kinds of things that make it lovely and fast. There we go. Now the load test is running. I've got one error. Let's see — I'm running locally, so it's saying your geolocation is not going to work. Fair enough. You can see it: zero failed, 467 passed, and that number will keep going up. It's making as many calls as it can. So yes, you can go kill your local service and wind up your network guys with this tool. It's pretty awesome. So let's just assume that's going to finish and work so that I can move on to the next part. Well, it's kind of going to work. Page response time just dropped, which is good — there must be a bunch of caching going on — but what do we have spiking over there? Errors per second. Errors per second just spiked. Oh, maybe that's something we've got to look at. Yeah? So I've maybe just found a problem with my application right there, and then it will come back and give me the final stats. There were over a thousand errors, and the thousand errors are actually 500 errors — internal server error — but they only started occurring, if you look at the graphs, about 30 seconds in. So if I'd done a 30-second load test, I wouldn't have caught that. We'll look at why that was happening in a minute. So I can set up that load test locally. I can also right-click and add a load test and build my cloud-based load test here in Visual Studio, doing it exactly the same way as I just did it for the local one. I'm using my account because that's what I'm connected to right now. And then you set up the scenarios just as you did before, but you get to pick which data center. It'll take a sec. Boom. There we go. Which data center do you want to run the load test in? So you might want to simulate calls coming in from the US to your environment and see what the latency is like for that. Or Australia, because the latency there is terrible — beyond ridiculous. The fastest internet connection you can get there is about 20 megabit, in case you were interested. And 20 megabit does not compute. Can you even fit a movie down that? The answer is no. So you can set up some pretty cool stuff there. Now, I have this solution all checked in to VSTS, so I'm going to show you a couple of things once we get onto the next section. But did that make sense? Are there any questions around setting up simple load tests, the Visual Studio local load tests and the cloud load tests? No? Cool. We will come back to that. So surely this is expensive. We get some time for free, yeah? But surely it's expensive.
The answer is no, not really. Not in the grand scheme of things — it's pretty cheap. I did a couple of calculations. I asked somebody in the speaker room what they were running and how many users they support, and they supported 200 simultaneous users for their application — they went and looked up their Google Analytics for their app: 200 simultaneous users. If I want to hit 200 simultaneous users for two minutes and I'm doing 10 deployments a day — they're doing 10 production deployments a day — then I need 200 times two minutes times 10. It's only 4,000 virtual user minutes, which is free. I get that free per month. Before it costs any money, you can do 50 deployments at that number, or you could ramp it up to 1,000 simultaneous users and still not pay anything for that number of builds per day. After that there is a cost, but as you can see from the numbers, it's ridiculously small. And the reason it's ridiculously small is, one, that they want people to use it, and two, it's really cheap for them to spin up servers — spinning up servers in Azure is ridiculously easy. You could build this infrastructure yourself to go do that, but it certainly wouldn't cost you this, because here you only pay for the virtual user minutes that are hitting your servers. I worked it out with a customer: I think they spent $8,000 for all of their load testing for six months, and that was on a major application with hundreds of thousands of simultaneous users. So it's not cost prohibitive. Compare that with the cost of how many servers you would need to support that many simultaneous users — you get about 100 stable, 200 max, simultaneous users per server if you want to use your own infrastructure before you start having degradation. Anybody here that can't use cloud load testing? Awesome. If you have colleagues that can't use cloud load testing, Microsoft do have infrastructure to install locally, with agents that you put on servers all over the place. But obviously that's going to cost a lot of money, because you need servers. Okay, so it's cheap, it's super easy — so why are you not doing it? How many people here are doing load testing right now? One. Everybody should be doing it. It's so cheap and so easy that you should be doing it, and doing it regularly. And the way you do it regularly: we want professional DevOps, not just amateur DevOps. Most people are doing amateur DevOps, same as they're doing amateur Scrum. So we want to bring this into our continuous delivery pipeline. I don't know if you've got something like this set up right now, but I want an automated build as soon as somebody checks in. Hopefully you've all got that: as soon as somebody checks in, we've got an automated build. Awesome. If the unit tests fail, well, we go back to the drawing board and we go fix that problem. But if they pass, what happens next? Do you just leave that build there and not do anything with it, or do you deploy it to an environment? Yeah? Maybe deploy it to some sort of stage one, where we're going to get some feedback. So I'm going to deploy it to an environment and then I'm going to do a bunch of automated acceptance testing. Maybe I use SpecFlow, maybe I use Selenium — whatever you guys are using to do your automated UI testing. If you were really crazy, you might run a coded UI test. If that fails, again, back to the drawing board. But what if all of that stuff passes?
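As a quick aside, the virtual-user-minute arithmetic from a moment ago as a throwaway snippet — the 20,000 free minutes per month are the figure quoted in this talk, so check current pricing before relying on any of it:

```csharp
using System;

class VirtualUserMinutes
{
    static void Main()
    {
        int users = 200;          // simultaneous virtual users
        int minutesPerRun = 2;    // duration of each load test
        int runsPerDay = 10;      // one run per production deployment

        int vumPerRun = users * minutesPerRun;        // 400 VUM per load test
        int vumPerDay = vumPerRun * runsPerDay;       // 4,000 VUM per day

        int freeTierPerMonth = 20000;                 // free VUM quoted in the talk
        int freeRuns = freeTierPerMonth / vumPerRun;  // 50 runs before billing starts

        Console.WriteLine($"{vumPerDay} VUM/day; {freeRuns} free runs per month");
    }
}
```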
You want to be doing performance testing. So have another stage where you run automated performance testing. And then, if everything is good, you can release — and maybe that's an approval. I've got a little approval there for you guys, because I thought that would be the case. So we want to move through, and here I've got two automated triggers and an approval, but you could have an approval at each stage. Because, obviously, automated load testing is going to cost you a little bit of money, yeah? It might only cost you your free money, but it's still free money you could use for something else. So you might want approvals in there. So how do we integrate this into our DevOps pipeline? How many people here are using TFS? Anybody? Two. VSTS? Cloud-based TFS? Okay. So in TFS — and you can use any of the DevOps pipeline tools; Octopus and a couple of others are good in that space, and I think there are some stands downstairs with those guys — let me drop out of Visual Studio and pop up in this. So I've got my load tests and I have — oh, I'm going to have to refresh that. I lost the menus last night. There was some kind of failure and all the menus disappeared, so I was trying to work through my demos and, poof, everything stopped working. It's always fun. I sent them an email: please make it work tomorrow. And they made it work. So I have a build that comes off that Git repository with the code I showed a minute ago. This is a fairly straightforward setup: I'm just doing a straightforward compilation and pushing the bits somewhere. The only clever things I'm doing: I'm using GitVersion, which is an awesome tool — if you're not using it, go look up GitVersion and use it. It does all your semantic versioning for you, you don't actually have to do very much, and you can bump the version just by checking in code and putting it in the commit comments. It's awesome. The other thing I'm doing is I've got some extra copies. I've got a script that I have to copy into the drop location. So I'm copying my scripts folder, because I have one script that I had to use since I couldn't find a built-in task for it — I'd prefer a built-in task, but I had to use a script. And the other one is I'm pushing the test settings. So Visual Studio creates these test settings that say where the test bits are. With test settings, if you've got a load test, you might have to pull in extra files; you might have to deploy extra files to run tests. Test settings is about pulling that information together. You might have three scripts that you have to run before you run your tests, in order to set something up for your system in test mode, or whatever it is that you have to do. I generally don't use test settings, but I need the test settings file in order to do something I'm going to show you in a minute, even though the file is the default configuration. And then I push all of the artifacts up to the drop folder. So I do a build, it pushes the artifacts up to the drop folder, and it has everything in it I want. And just to show you what that looks like, I'm just going to explore that. I have my application — this is a standard Azure web app, so I have my zip file for my app — and then I have my two test settings files. And in here somewhere I've got other stuff that I might care about for testing. So all the assets for executing the unit tests, all of that, is just part of that drop.
I don't actually need it in there, but it sets up some defaults. So I have this automated build, but it does not deploy my application anywhere — I'm just doing a compilation. But as soon as my compilation completes and it's successful — that is, if I queue a new build; we may not wait for that — it's going to trigger a release. So if I go over to all releases, I have a release definition, and I have multiple environments, or stages, in my pipeline. And I do have a bug — there's the good, the bad, and the ugly with all sorts of tools. There's something I couldn't figure out; I'm going to show you what that is, and I've emailed the product team and gone, what the hell is going on with this? So I have three stages to my release pipeline. In this case they all succeeded: stage one, stage two, and stage three. And if I click on environments — I don't know why they're called environments — stage one is where I'm deploying my application. Okay, so I'm pushing it out to Azure, pushing it out to my account, and I'm pushing it to the staging slot in Azure, because Azure nicely gives us these little staging slots you can just push bits out to. If you were doing an on-premise or local app, you might be pushing out to "here's my staging server, and here's my production server over there." You might do it that way. And what am I going to go deploy? It's just building and deploying that Azure website. The reason I do the demo with an Azure website is it's so ridiculously easy that even if the website doesn't exist, or you change the name, it goes and builds a new one for you. So you don't have to tinker about with IIS and servers and all those kinds of things — it just does it for you. And then I'm running a quick load test. I'm just doing a URL-based test, so I'm not even configuring it in the other tool I used — I'm literally just filling it out here. I'm putting in the URL, 25 users, 60 seconds. Pretty straightforward. And I've given it a name of stage one, so we can go find it in the other tool. So when that executes, I go to load test, I go to stage one, and see, I get a .loadtest. It doesn't really exist as a physical file — it's like a little virtual file. And I have my load test that ran 10 hours ago. And you can see I used 250 virtual user minutes, and everything was awesome. Everything went well. Zero errors. 19 requests per second. Average response time, a quarter of a millisecond. And it's okay — I don't have a very big server, I don't have a very big setup, and I'm just calling the Web API. All super easy. Makes sense? Super straightforward. Okay. So if that is successful, I move on to stage two. And in stage two I'm still talking to my staging environment, but I'm doing a bigger load test, because load tests cost money. If a little load test fails, I don't want to go and do a big load test — I want to save money. That's why I want to break my deployment pipeline up into discrete stages, rather than having it all as one big massive build process; otherwise you don't have the choice to exit early when stuff doesn't work. So here I'm doing a slightly longer load test. It's the same load test, 120 seconds, so I'm doubling the amount of time. And if we go over to our load testing and I go to stage two, here's my problem. See this green tick? It's a green tick, because it's a "successful" load test. However: 50% failed requests. Why am I getting 50% failed requests? If I go into errors, I'm getting all those 500 errors that we found when we did the local one.
Oh, that was the wrong button to click. And if I go to charts, look at that. Exactly one minute in. So our 60-second load test didn't catch this, but our 120-second load test did: failed requests start kicking in, and all the requests fail after that point. 100% fail after the first half of the load test. So even just doing a load test with the same number of users, with the same load, for slightly longer, our application fails. I wouldn't have caught that if I hadn't done a load test. I've been playing with that app for weeks, by the way, and I found that the day before yesterday. I can go to the website — look, works fine. I can click it really hard, and it still works. But 25 simultaneous users, 25 calls per second: boom, crash, after a minute. So what's actually happening behind the scenes? Remember I mentioned telemetry — having application telemetry? I actually have application telemetry here. In Azure, I have this little symbol here, which is my Application Insights. Application Insights you can use on your local application as well — it's not tied to anything else. It's just metadata, so it's performance data. So if I click on this, I'll see — oh, it's loading a bunch of stuff — but I see the calls that were coming in. I'm getting a lot of failed requests, the 6,000 failed requests. If I go over here: oh, my app connects to a blob, and I'm getting 20% failed requests hitting that blob. Basically all my app does is generate up to five random book names — ISBNs. That's it, that's all it does. And it used to get the data from a string hard-coded in the app. I wanted to change that, so I changed it to a string in a text file uploaded to Azure, and I just call that. But if I call it too many times, at some point it starts failing. So maybe I want to cache that, and then all of my errors will go away. So how do you identify where the problems are? How do you understand that these things are happening? You need telemetry as well, and this tool can be used on-premise, it can be used with desktop apps, it can be used with whatever you deploy. It's doing some fun detection of stuff that's part of Azure, because it's on Azure and it can go figure that out for you. But if you're on-premise — you've got your desktop application, you've got web servers deployed locally — you can still collect all of this telemetry. I have customers that deploy their application on-site to customers, and they collect telemetry from those customers, and they know when a deployment or an update has failed before their customers do. They're phoning their customer saying, we're going to have to get an engineer out because there's something wrong with your application, before the customer has even realized there's a problem — which is awesome customer service. This is collecting metadata on response time, and you can put your own metrics in there as well; I have a bunch of applications that do that. And I can marry that up with my load test: at what point in time did my load test start failing? At what point in time did my Application Insights kick into error mode? And marry all of that up. Does that make sense? So my only problem — this is the ugly part — is this green tick, even though I've got 50% fails. And I haven't been able to figure out a way to get that to go red cross, because I want it to fail. I want it to say: that's bad. So there are two types of load test task here. Where is that in the release? If I go edit that release, I'll show you.
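Before that, the caching fix being hinted at — fetch the blob once and cache it, instead of hitting storage on every request — could look roughly like this. The blob URL and the one-ISBN-per-line format are assumptions for illustration, not the demo app's actual code:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class IsbnSource
{
    private static readonly HttpClient Client = new HttpClient();

    // Lazy<Task<T>> downloads the blob once, even under concurrent load;
    // every subsequent request reuses the completed task.
    private static readonly Lazy<Task<string[]>> CachedIsbns =
        new Lazy<Task<string[]>>(DownloadIsbnsAsync);

    public static Task<string[]> GetIsbnsAsync() => CachedIsbns.Value;

    private static async Task<string[]> DownloadIsbnsAsync()
    {
        // Hypothetical blob holding the ISBN list, one per line.
        var text = await Client.GetStringAsync(
            "https://example.blob.core.windows.net/data/isbns.txt");
        return text.Split(new[] { '\r', '\n' },
                          StringSplitOptions.RemoveEmptyEntries);
    }
}
```

A real service would also want expiry and a retry — note that with this pattern a failed first download stays cached — but even this removes the per-request storage hit that was being throttled.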
So I've set up the quick web performance test, and I couldn't figure out how to get it to fail — I've asked the product team to go fix that. But if I add a different type of task, a cloud-based load test, it's not quite so simple, because I need to not only set up my connection back to TFS, but I have to go load my settings file from in here. Where's my settings file? It was in Books — nope, it was in test settings — and I have Books, and then I had a load test, okay, and then my load test files folder. I think that's going to be in there: the load test file, the .loadtest. And then it will go load that physical load test file from my code. And then I've got a number of permissible threshold violations that I can set and say, well, this load test should fail. I want that on the quick load test as well, because that makes sense — it's super easy to set up, much easier than this, which is why I didn't set this one up for you. But you can do that yourself. All this little task does is call some PowerShell to go spin up the load test. If you look at the APIs for that — they're on the VSTS documentation site — there's a whole bunch of APIs for spinning up load tests, and you can even do more advanced stuff. Like: I don't want everything to come from one location; I want a load test to run from nine different locations worldwide, hitting my servers simultaneously. It will let you do all of that through the API as well, so you get full access to do whatever you want. All of this stuff just calls the API — there's no hidden magic they're doing to set it up. So that is pretty awesome. So there's the link — well, I think you'll get the PowerPoints as part of the conference; I'll send them to Jacob. So we've got the load testing APIs. So, in order to get professional teams, we need two things. We need professional Scrum — or professional Kanban, if you prefer; we need to be professional about our process — and we need to be professional about DevOps as well. That's a really hard place to be. So there are some links in the presentation for how to get all of this set up. If you're Java folks, they also have full support for Java. The new Microsoft — anybody see the news this morning? Microsoft just released their own version of BSD for running on Azure. So Microsoft have just released Linux. Well, April 1st there was a "Microsoft just released Linux" April Fool — and then it kind of happened anyway. But there's full support for JMeter load tests as well — that's the common Java standard. And again, you can hit the APIs with whatever the heck you want, and there are lots of different ways to configure it and set it up. If you're interested in more information around the ideas behind this — load testing is just a tool to go use, but we need to know how, why and when to use the tool — I'm obviously going to recommend this book, because it's mine, but it's a little bit old. For DevOps: The Phoenix Project. If you've not read The Phoenix Project, you should go read The Phoenix Project. It's an awesome book. And for the Scrum-based stuff, go read Ken and Jeff's book, Software in 30 Days. And I have a blog where I post as much information as I can about how much fun it is setting things up and having them fail. I'm sure you guys have blogs as well. It's no fun if everything's awesome — you only really want to blog about all the stuff that doesn't work and how you managed to fix it.
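On those load testing APIs: queueing a run over REST might look something like this from C#. The host name, route, and payload fields here are written from memory of the documentation of that era, so treat every one of them as an assumption and verify against the API reference before using any of it:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class QueueCloudLoadTest
{
    static async Task Main()
    {
        var account = "youraccount";  // hypothetical VSTS account name
        var pat = Environment.GetEnvironmentVariable("VSTS_PAT"); // personal access token

        using (var client = new HttpClient())
        {
            // PATs go over basic auth with an empty user name.
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
                "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes(":" + pat)));

            // Field names sketched from the docs of the time; check them first.
            var body = @"{
                ""name"": ""stage2-load"",
                ""runSpecificDetails"": {
                    ""duration"": 120,
                    ""virtualUserCount"": 25,
                    ""urls"": [ ""https://books-staging.example.com/api/books"" ]
                }
            }";

            var response = await client.PostAsync(
                $"https://{account}.vsclt.visualstudio.com/_apis/clt/testruns?api-version=1.0",
                new StringContent(body, Encoding.UTF8, "application/json"));

            Console.WriteLine(response.StatusCode);
        }
    }
}
```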
And feel free to come up and speak to me. So, does anybody have any questions around that stuff? Cloud-based load testing? No? I covered it perfectly. I guess the biggest hurdle is getting Visual Studio, correct? Visual Studio Enterprise, yeah. You need Visual Studio Enterprise to get the capability in the IDE; the web capability you get anyway. But you only get free minutes with — or, I think it's graded: I think you maybe get 5,000 free minutes with Professional and then 20,000 with Enterprise; I can't remember the exact figures. You get 20,000 with Professional too? Okay, so I misstated that, sorry — you do get 20,000 free minutes with Professional as well. You just don't have the tools in Visual Studio unless you have Enterprise. But you can do all of the stuff that I showed you on the web, pointing and clicking, and you can import a Fiddler trace, so you can make exactly the same thing Visual Studio does — it's just a little bit more manual work to set it up and configure it. Yes? So the question was: what about load testing things other than web applications? Well, what I was testing wasn't technically a web application — it was just an API. And I'm with you: I build a lot of desktop apps. I don't build a lot of web apps myself, but I have a lot of customers that build both. You'll find that web front ends you want to go load test, and APIs you want to go load test if they're web-based APIs. If you've got everything running on a single machine and you've only got one user anyway, then that's not a load test — you're not worried about load there. What you might do is simulate single-user load by creating a load test out of a bunch of unit tests, executed quickly in a particular fashion, if you're doing simulations. Yeah? One of my customers here in Norway builds simulators for training people in ship handling — they have a stand downstairs with a big chair — and they do their load internal to the simulator. So — I couldn't quite hear the details — but yeah, absolutely, you can do that. I created a load test by calling a web test, but you can create a load test by calling a unit test inside Visual Studio; you can select it as part of the load test framework, so you can call something else entirely. Yeah? So, no, they do not. If they show up in the Visual Studio test list — i.e. there's an adapter for that framework — then it will work. So if you've got adapters — NUnit, xUnit, Jasmine, there's a whole bunch of adapters out there, and not all of them are from Microsoft; other people have built them as well — most of it will work. And if it doesn't, because you've got some custom unit test framework that nobody's built an adapter for, you can build an adapter for it. But as long as it's listed there, you'll get it. So uninstall ReSharper and then see whether it's still listed, or use a machine without it. But it should work. It should work. Any other questions? Cool. Feel free to come up afterwards if you've got any questions that you didn't want to let everybody else know about. Thank you very much. Thank you.
The only way to know if your systems can handle the number of users is to load test; however, load testing is hard and the infrastructure expensive. Come and see Martin demonstrate the tools and techniques that are required to test your software under load, even in production.
10.5446/51733 (DOI)
So, hi everyone. Welcome. Thanks for being here at the very last talk — and at our talk. Are we ready to start? Yeah, I think so. Okay, so thanks for coming to our talk, titled Making Komplett Big by Going Small. Who are we? I'm Pavneet. I'm a web developer at Komplett. I'm also a squad lead — we'll get into that a little later. And my name is Thomas. I'm a lead software architect on the web team at Komplett. And what are we going to talk about? We are going to talk about how we've scaled our team and our group as we've changed. We've gone from a small group to a rather large one, and we've had to make changes both to our architecture and our organization in this process. And that's what we're going to talk about today — so it's kind of an experience talk. Right. So, Komplett Group: for those of you who aren't familiar with us, we are a group of web stores. We have 16 web shops active now. We have around 7.3 billion kroner revenue — just under a billion dollars — 1.8 million active customers, 800 employees, and around 20 million uniques (we suspect that means unique sessions) in 2015. These are some of our stores. It may not be that easy to see, but we're traditionally an e-commerce electronics store, and you may be able to see that we've now branched off into travel, insurance, car parts, baby equipment, amongst other things. And just last week we started a mobile phone company as well. So, things are changing. But we've not always been as big as we are today. We started in the mid-90s, and then we had a very simple architecture. We had an online catalog, and as a customer, you contacted the sales department — you actually phoned them. I know it sounds quaint, but that's how it was done. You phoned them and they called out into the office and said, everybody get off the system, because I have a customer. It was a one-user system where we placed orders directly into the system while we had the customer on the line. That didn't scale very well. So we went on and created what would become our future web platform, Chrome. We called it Chrome in the early 2000s — so we were before the browser. If anybody mentions Chrome, it's us. It's our name. Okay? At least for this talk. So, the Chrome-to-be — it wasn't called Chrome at that point — was one of the first e-commerce stores, in Europe at least, that actually showed you an online stock status. So you could go to it and see if we had the product in stock. Customers connected to it directly, and the orders were placed in the order database. Still a quite simple architecture. Then we got a bit over ourselves and we bought an ERP system — a German three-letter acronym. It shouts at you. From this point on, it's actually called Chrome. So now we had to have a system that kept better uptime than this German system, because apparently such systems go down. So now we had a message queue talking to the system: Chrome placed orders through messages, and the ERP system got them from there and populated our database directly. That was our main architecture up until, well, now really. Pretty recently. So we're really still here, but we have done some changes and we are working to change it. But what Chrome did at this time — and this will blow your minds — was connect to the database, get XML back, read that in, and do an XSLT transformation on that XML, producing HTML in DTD format.
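To make the shape of that concrete before the actual slide: a schematic reconstruction of the pattern in rough C# — not Komplett's code; the stored procedure name, connection string, and stylesheet are invented:

```csharp
using System.Data;
using System.Data.SqlClient;
using System.IO;
using System.Xml;
using System.Xml.Xsl;

public static class ProductPage
{
    // Schematic: call a stored procedure, get XML back, push it through
    // an XSLT stylesheet, and return the resulting HTML as one string.
    public static string Render(string productId)
    {
        string xml;
        using (var conn = new SqlConnection("...connection string..."))
        using (var cmd = new SqlCommand("GET_PRODUCT_XML", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@PRODUCT_ID", productId); // shouting, as the database expects
            conn.Open();
            xml = (string)cmd.ExecuteScalar();
        }

        var transform = new XslCompiledTransform();
        transform.Load("product.xslt");

        using (var reader = XmlReader.Create(new StringReader(xml)))
        using (var writer = new StringWriter())
        {
            transform.Transform(reader, null, writer);
            return writer.ToString(); // written straight onto the response
        }
    }
}
```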
So, what's on this slide is actual running production code — from May 2008, not today. It's much better today, right? No, no, it's not. So, this does three things that I want to point out. First, we have a static method returning a string, and this string will go directly onto the response — no controllers, no nothing. It's very efficient. And what it does is go directly to the database, populating parameters — or arguments; I don't know, I think the ones on the left are parameters and the ones on the right are arguments. As you can see, it shouts at the database, because the database talks SAP, and we populate the stored procedure call directly. We actually make an XML header down there. And what we get back, if you go to the next one, is then run through an XSL transformation, and that's put directly onto the response. This is how you create a million-dollar industry, people. This is high tech. Okay? So, if you were wondering: we weren't really very happy with this. It's nasty and it's error-prone and it's global state all over the place. And every time we did something, we ended up breaking something else somewhere on the site. Usually gift cards — they always break. We'd add a button somewhere and gift card stops working; we'd remove a button and gift card stops working. It's horrible, horrible. So, what do we do? Well, like Thomas mentioned, we were at a bad place here. What does that mean, a bad place? This platform had served us for many, many years. If you look at the team, we were five developers at this point, sitting in each of our corners of our room — yes, we had a pentagonal room — each doing our own thing based on business needs, which just sounds fair enough, right? Because we prioritized things based on the person that shouted the loudest, and whoever shouted loudest got their things through. So, as we did things, based on what you just saw, we started to grind to a halt in regards to our speed and our velocity. We stopped delivering features — or slowed down delivering features — and started delivering more bug fixes. Usually on gift cards. And when we weren't fixing gift cards, we fixed something else and broke it. So if you ever get a gift card, you can hope that it works. Sometimes it does. It's been fine, hasn't it? This is fine, right? So, basically, we were struggling. We were at a point where we needed to do something, and we weren't really feeling inspired. And along came a business need. Yeah, so this was when mobile phones started to get popular — you may remember that time, the heady days of the first iPhone. So we needed to be able to show a site that worked on a mobile phone. Either we could produce an app, or we could produce a website that actually worked on mobile phones. But we needed a mobile presence. It was a growing market, and business came to us with this need. And we said, sure, we can do that. We just need to rewrite everything. Because if you've ever done XSLT transformations, taking XML and making DTD HTML out of it, it's not really conducive to rendering for mobile phones. It's nasty. It's challenging. So, what did we do? Well, let's talk about Project K2. The K is for Komplett, the 2 is for — two. Yes, we are good at naming. There's more on that later. So, Project K2 is our response to the business need. The business said: we need a mobile presence. We translated that to: we need a mobile-first web store.
So, at this time, responsive web — or adaptive web, as we like to call it, but we lost that battle a long time ago — responsive web was in, I wouldn't say its infancy, but it was starting up. We didn't have cross-browser support, and it was a little unsure. Another approach at this time was building standalone mobile websites, which is what we went for, and that's also what we proposed. It was a good thing, because it meant we didn't have to touch the old code. That's a really good thing. Yeah. Which meant we could do a File, New Project. And we thought, okay, this is great. Now we can finally focus on the good stuff. Do it right. Do it right. Yeah. Buzzwords — anyone recognize these? We did all of them. Yes. So, that's it: we went for code quality, we went for the SOLID principles, we were going to be test-driven, we pulled in CQRS, DDD, event sourcing. That's the technical side. Then, obviously, on the process side, we introduced Agile and Scrum, and we did pair programming. And with all this came new admin pages, a new infrastructure, a new architecture, as you see, and obviously a new deployment pipeline. It will be easy. Yeah. And what this meant was that we were taking our old monolith, turning around, and creating a new one. And it was actually pretty good. We learned a lot. We learned to work as a team. We learned to write better code. We learned a lot about Agile processes. And we delivered m.komplett.no, the first mobile version of our store, within around six months. We made it, and we made it work. It was good. But what did this mean? Six months. But it didn't have gift cards. Oh, yeah. So we couldn't break anything. That's why it worked. So basically all five of us had gone over to work 100% on this platform, which meant six months without any new features on the existing desktop platform. Or bug fixes to gift cards. Which meant that the business owners had to wait, right? And they were fine — let's wait for this. We delivered it, and then they were eager: cool, now we can focus on fixing gift card. We couldn't leave gift card — it's a nightmare. And we thought, okay, we got the feature request in, we started looking at what was involved, and we realized we didn't want to go back to that XSLT. We don't want to go back there. So we thought: we've more or less created a new web store. Let's do a little more. Let's create the desktop version of this on the new platform. Should be easy. The business never asked for this, but we got an easy accept. We said it would be fast — they'd seen what we did in six months — and we were ready. And we ended up, well, building it out. What we had to do was gather requirements, because we didn't know everything that was happening in the old system. And the thing is, neither did the business. They didn't really know all the quirks — I mean, when you have a system running for 10 years, there are so many small quirks. So the spec is the old system: make it like that one. And that's what we did, because that's what they told us. How should this work? Can we do this? Can we do something nicer? Could we just not implement gift cards? Now, that was an option. But they said: look at the old system, do what that does. And this led us to a feature parity death march. Yeah. So we spent, I think, one and a half years trying to reimplement the entire Chrome platform in K2. Just building functionality we already had.
And remember, while we were doing this, the old desktop site — the one that was actually earning us money, the one that was important — had now not been touched in two years. And the mobile platform, which very, very few users actually used (because people don't like to go to m-dot sites, for some reason), and which had very few features — that also wasn't being worked on, because we were working on the new shiny stuff that hadn't been deployed or activated in any way yet. So after two years of this, the organization came to us and asked: hey guys, we kind of need to do stuff. Are you finished soon? And we said: we don't know. We didn't even know which features we were missing, because nobody knew. So they did the only sane thing. They killed it. Yeah. That's two or three years of your life down the drain. So if you learn one thing from this: rewrites are bad. I'm not saying they can't work, but it's really, really hard, and if you can find a different way of doing it, please consider it. So this was a very dark time for us on the web team at Komplett. We had five developers when the project was killed, and we lost half-ish of those. So one of the largest e-commerce sites in Norway suddenly didn't have a lot of web developers. We'd lost our self-confidence. We'd lost our respect in the organization. The organization then embarked on a year-long search for some platform — we still wanted to be a web e-commerce thing, and there must be some platform we could buy. And there are. There are platforms out there; most of them have three-letter acronyms. So we launched a rather large project to figure out which one was best for us. We got proposals in, and we kind of played them against each other, and we saw who had the features we needed, who had the support we needed. And after a year of this we had, I think it was, three finalists. And the web team was asked to just submit a proposal of our own for our Chrome platform — kind of to have a baseline, because we knew that nothing could be worse than that, so we had to have something above it. And then, on — I think it was a Wednesday — we actually got the call that we won. So we have it on great authority that we are the best in the world at this. Do you remember what time it was? I don't remember the exact time. So what did that mean? It meant they still trusted us — or they put their trust in us, rather. But it also meant there were new business domains they wanted to go into. They wanted to do new things. Well, actually not they — we. And we still had a great heap of code that hadn't really been touched in, actually, three years now, right? And we still didn't have a mobile platform. So we were three years behind the rest of the field. And we asked ourselves: can we do this? I mean, can we do this? We were actually really unsure. Three years earlier we had just said we couldn't do it on Chrome, and now we had to. So how could the same people get trust and deliver? That's what we had to ask ourselves. So: they trust us — meaning the business and the organization trust us. But do we trust us? Can we trust in ourselves? Can we get to a place where we can make this work? They say that culture eats process for breakfast. So that's where we started. We started with culture. We started with values. We started with defining who we are as developers, and what it means to be a developer in Scandinavia's — or Northern Europe's — largest e-commerce provider. So we did this. We spent time on it in one of our workshops.
And we ended up with 10, 11 — I can't count — a number of values. These values are things that we believed in, things that we embodied. Some of them were things we felt we already had; others were things we aspired to be. This is for ourselves, and it's also for new developers coming on. Because this is the point, right? We can't scale with just us five. We needed to expand. Yeah. So then we did the next thing: we just added lots of developers, right? Business came to us with needs — the entire organization had needs, our customers had needs — and we didn't have the manpower, the muscle, to do it. So we needed more people, and we hired a lot more people. The good thing was that we'd already thought about culture, so we didn't just fall apart on that. But just adding people — developers — has its own problems. At this time we were, like, five developers, and we had a QA guy and we had two ops, and that was the web team. So pretty small, right? And we needed to scale. So we looked around. If you've ever searched about scaling agile, there's a lot of talk about that. But there's a kind of local firm that seems to do this pretty well. So we looked at them. They're not Norwegian — they're Swedish. Spotify. You might have heard of them. Around this time they published their Spotify model. Have you heard of it? Raise your hands. Yeah, good, good. Then we can go quickly through this. So this is a part of the Spotify model we've cut out, just to go quickly through it. It's basically built around autonomous squads who deliver to a specific business need with their dedicated product owner. They're autonomous, meaning they have control over their build pipeline, control over everything — deployment, all the testing — and direct access to whoever is the representative on the business side. These squads are then grouped together in tribes, and these tribes usually represent a logical business area, crossing different business domains. Across squads we then have something called chapters. Chapters are basically interest groups: it could be front-end interest, it could be DevOps, it could be something domain-specific. And then you can have multiple tribes. This is how Spotify did it. One of the core features — one of the most important aspects — of the Spotify model is that each team is co-located: each team has to be in the same area, same place, same office, and work together. So we took this — this is awesome, this is a blueprint, this is something we could use for ourselves, right? And we took it, and we started with our major project, called the Minion Project. We'll get to that. So this is Bob. He's in our office. Yeah. He runs around all the time. Fun guy. So the Minion Project is to not do a big rewrite. It is to change Chrome while it is live and make it mobile-friendly. It's nearing its end now, actually — so if you go to our sites, they're actually quite good, I think, on mobile. That's what the Minion Project is. And we had said we couldn't do this years earlier — and back then, we couldn't. But that was three years ago, which is an eternity in internet years. So now we actually could. Now we had the technology: browsers were sufficiently advanced, and we might have learned a thing or two. But still, we needed to produce at a rate that we couldn't as a few. And with the Spotify model, we thought we were able to scale our team. Ramp up.
So we started ramping up. This is an illustration of how we ramped up over time — the timeline here starts beginning-to-mid 2014. We started scaling across multiple offices, and we scaled by adding consultants, basically, because it's really hard to get good developers locally. We added smart developers, good people that became part of the team. But something happens when you throw many people into a monolith. Even with the Spotify model, and starting to split people up into squads and talking about business domains, we still had one code base. And now we had many, many, many developers working in that code base. And if you've ever worked in a shared code base — I know Facebook can do it, and I know Google can do it, but really, we are not Facebook or Google. We're not that clever. So what happens is that one developer does something to make some new feature or fix some bug, and then gift card breaks. And another developer does something else, and that doesn't merge with the first one's changes. And suddenly you have long-living feature branches that are weeks out of master, and you need to rebase them, and you can't rebase them because things have changed horribly. It's just not working at all. So what do we do? Our problem was that when we actually managed to merge something and deliver some deployable thing, we'd put it on our test systems and it never worked — gift card always breaks, and everything else always breaks. So we had things living in our test systems for weeks before we could — almost never, but eventually — dare to put them in production, and then figure out what breaks only in production. It's a bad place to be, because with such a large monolith, with many people working on it, actually stabilizing it is a non-trivial task. So we figured out that we need to change. We can't work like this. We need — just because we're not good enough, I guess, to make such a thing work — to split it up into smaller deployables, because we need to get something to production quickly. And we sat down and we made an architectural vision for Komplett — the Komplett vision. I'm not saying this is what everyone has to do, but we believe that for our domain and our organization, we need to have a landscape of services: many services that live in the environments and connect to each other. We need those services to be stable in their contracts, so we are not allowed to make breaking changes to services. We actually go as far as to say: if you are breaking your contract, then you are making a new service. You should not overwrite the old service; you should deploy a new service beside it and tell your clients to start using that instead. We needed consistent deployment, and we had to do this while our system was running. So how do you do that? Well, you take whatever part of the monolith you need to split out, and you make a service for it — because you can't do it in the old one, because gift card will break — and you put a feature switch around it. So you deploy your new service and you make sure it works, and you can actually make it work, because it's smaller, so you are able to understand it. Once we're happy with it — we've seen that it actually works in the development environment — then we turn it on, by the feature switch, in the test environment, check that it works, and fix whatever doesn't work in test, because there's always something. And then we do the same in production.
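A minimal sketch of that feature-switch pattern — every name here is hypothetical, and a real implementation would read per-environment configuration rather than hard-code anything:

```csharp
public class Cart { /* cart contents elided */ }

public interface ICartClient { Cart GetCart(string customerId); } // the new service
public interface ILegacyCart { Cart GetCart(string customerId); } // the old monolith path

public interface IFeatureSwitches
{
    // Backed by per-environment config, so a switch can be on in
    // dev and test long before anyone flips it in production.
    bool IsOn(string feature);
}

public class CartFacade
{
    private readonly IFeatureSwitches _switches;
    private readonly ICartClient _newService;
    private readonly ILegacyCart _monolith;

    public CartFacade(IFeatureSwitches switches, ICartClient newService, ILegacyCart monolith)
    {
        _switches = switches;
        _newService = newService;
        _monolith = monolith;
    }

    public Cart GetCart(string customerId)
    {
        // Deployment and activation are decoupled: the new service is
        // already running everywhere, but traffic only goes there once
        // whoever owns this part of the site turns the switch on.
        return _switches.IsOn("cart-v2")
            ? _newService.GetCart(customerId)
            : _monolith.GetCart(customerId);
    }
}
```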
So we separate the actual deployment of the services from the activation of the services, through feature switches. It's a strategic choice — it's made at a different level than the technical level. As developers, we don't decide when something turns on. That's up to the people who are responsible for that part of the site. So we have people who are responsible for the checkout, and they decide when we turn things on and off in the checkout. That's how we decided to do it, and it's been working relatively well. But the problem is, creating a new service sounds easy, right? It's not. Because you need to make a new repository, you need to set up the build jobs, the deployment pipeline, you need to manage the servers and make sure they're up and running and ready to be deployed to. You may even need to create databases, and all of that stuff. And that meant that when we told the developers, please don't build this in the old one, make a new thing, it was: I don't want to do that, because then I can't start actually solving the problem for a week or two, because that's how long it takes to get it out and running. And often they kind of shortcut it, so they got it running in the development environment, and nobody really cared about test, because it's not going live for a month, right? And then you have to take it live and it doesn't work in test. So what we had to do was make something that can make a service. We debated the naming of this — for a while it was the service maker, well, it's still just the service maker; I think we also called it Foundry at one time. It doesn't matter. It's an interesting little thing that's been very, very important for us. We have a developer who needs to create a service, so he or she goes to the service maker. The service maker is actually a website on our internal network where the developer enters the name of the service and whether or not that service is going to need a database. They press the create button, and the service maker goes out, connects to Bitbucket in our case, and creates the Git repo. It connects to our build machine, which is Jenkins for now, and to our deployment machine, which is Octopus for now, and sets it up on all of those. It also goes to the database service and creates new databases — and no service is allowed to use another service's database, so if you need a database, you get a new one. Once it's done that, it commits a template of a service to the Git repo. That's picked up by the build server, which builds the service — because that's what build servers do — and pushes it to our deployment server, Octopus, which then deploys that service to all environments. So about 30 seconds after pressing create, they have an actual running service, all the way out. This has made it immensely easier to convince developers to create a new service instead of doing it in the old one. So what we've learned is: if you are going to have a service architecture, creating services must be cheap. So we've talked about how we do this technically, but how do we actually do this from a process and business standpoint? How do we actually push work through our system? We mentioned that the Spotify model brought us this far. But we were feeling pain, right? And being strictly location-based didn't really work for us.
We didn't have that many developers — or it just didn't work for us; it's not how our organization grew together. So we went back and looked at the model, and we thought, we don't like those squares and things — we created circles. Circles are much better. This is what we would like to call the Komplett model. It's circular. This is actually a snapshot of how the team is basically screwed together today. Just to walk you through it: each small white circle is a person, and each colored circle is a squad. Each squad has a color which identifies them — not a name, not a feature, not a business domain — because you're not guaranteed to work on the same thing forever. So each squad has a color, and we keep them as stable as possible: you want the same people working on the same part of the domain. A squad is built up of four to six developers with their own dedicated QA — someone who's an expert at testing. How they solve QA within the squad, that's up to them. We have two dedicated DevOps, and they're outside the squads. And then we have the orange squad up there, which is our infrastructure team. Each of these squads is focused on delivering business value, and they work within a feature set. So, as an example, the red squad here is solely responsible for the checkout and nothing else. The black squad is solely responsible for the shopping cart and what's involved there. These are now different parts of the code — something we've split up into different deployment units — and that squad is responsible for delivering the value there. And each of these groupings, these larger circles, is a logical grouping across multiple domains. In the Spotify model, this would be similar to a tribe. Within this tribe we then have one or more UX-responsible people, who speak to the business experts, gather requirements, and work with them directly. And we also have a delivery manager. So this is basically how we're set up now. And the infrastructure squad is responsible for making the lives of every other squad a lot easier. So we make the service maker — I'm in the orange squad; we make the service maker and stuff like that. So you're the service maker maker. Oh, yes. So what we've really done here is what we like to call the reverse Conway. Conway's law says that any architecture will kind of reflect the organization and the partitions in the organization. So we've tried to partition the organization the way we want the architecture to be — if that makes sense. But again: we've talked about people, we've talked about scaling, we've talked about everything — but how do we actually work? The real key here, as we've learned through experience, is communication. And none of these squads are co-located. Each squad has always got members from at least two locations — and one, I think, actually has four, which is almost as many as there are people in the squad, not counting people working from home. Yeah. And this is where we really break from the Spotify model. There's a reason for this, right? The reason is that bringing aboard new members, especially in remote locations, makes it really hard for them to actually get the domain, get the culture.
And by having squads which are spread — with representatives in the main office in Norway and in the other offices — it's easier to actually spread knowledge and bring people aboard, and the onboarding process becomes easier for everyone. But communication is still hard, right? We still need some way of bringing new developers that aren't nearby closer to us. So one of the things we've done is set up a window into each office — and we have something called a pyramid for this. That's me. And that's Thomas. Yeah, so we have multiple of these screens where we can peer into each other's offices. We're not watching what people do; this is just about glancing and seeing who's there, seeing what's happening. Are they having cake? It's a lot of fun when they're having cake, because then we're standing on the other side, looking into the camera: we want cake, please. Send cake here. We dance sometimes and, yeah, we have a lot of fun. So this gives us kind of a window into each other. It's easier, because we're just there, all the time. In the beginning it was awkward, but now it's natural. And, continuing on communication, we have something called Flowdock. It's a lot like Slack — except this works. (For us, please remember: green notes only.) The major difference between this and anything else is that it's not as fancy, but it has something called threads, or flows, which makes communication a lot easier. This is a shot from our water cooler flow. You see a very beautiful man over there — he's known as Pretty Thomas. I'm the other one. Thank you. And, I'm not sure if you can see this, but if you see the rooms on the side: the rooms are basically whatever we need in regard to squads and structure. We have other rooms, like interest rooms — the checkout room over there — and then we have something called Kodepanelet, the code panel, which is something we do on YouTube. Then we have KDC, which is our Komplett developer conference, where we gather together yearly and do things together. So basically we create channels based on whatever we need, right? And we have GIFs — yes, I said GIFs — on the water cooler. We also use a lot of Skype and TeamViewer for pair programming. So the main learning here — I mean, it should be obvious, but we tend not to care sometimes — is that organization really does matter. So: we're making a landscape of services. That should be easy, right? Each service is a small thing. It's not hard to make a service — we've made sure of that. So let's just do it, right? Well, it turns out there are different problems with services than with monoliths. How do we make sure that our services are actually up? How do we make them perform? How do we make them scale in ways that are relevant to these services? Well, each service has an SLA towards its clients. We use a lot of caching to make sure they perform at that level, and we are very, very into performance testing our solutions — whenever we launch something new, it has been performance tested quite strenuously. But the services also need some common characteristics, and we make that easier through the service maker. They need to do stuff like: they need to have known health endpoints, so that our infrastructure can make sure they're up, and say that it's safe to actually send traffic there — because services go up and down all the time.
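A sketch of those conventions — a well-known health endpoint, plus the request correlation described next — as ASP.NET Core-style middleware. The path and header name are assumptions for illustration, not Komplett's actual ones:

```csharp
using System;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

public static class ServiceConventions
{
    private const string CorrelationHeader = "X-Correlation-Id"; // assumed name

    public static IApplicationBuilder UseServiceConventions(this IApplicationBuilder app)
    {
        // Known health endpoint: infrastructure polls this to decide
        // whether it is safe to send traffic to the instance.
        app.Map("/health", health =>
            health.Run(ctx => ctx.Response.WriteAsync("OK")));

        // Correlation ID: reuse the caller's ID if one arrived, otherwise
        // mint one, and echo it back so one front-end request can be
        // followed through every service it touches in the central logs.
        app.Use(async (ctx, next) =>
        {
            var id = ctx.Request.Headers[CorrelationHeader].ToString();
            if (string.IsNullOrEmpty(id))
                id = Guid.NewGuid().ToString("N");

            ctx.Response.Headers[CorrelationHeader] = id;
            ctx.Items[CorrelationHeader] = id; // picked up by the logging library
            await next();
        });

        return app;
    }
}
```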
And we need them to log to a centralized logging service in a specific way, so that we can recognize requests throughout different services. If you go to our front end and do something, there might be five services, ten services, involved in serving your request in some way. In a monolith that's quite easy to log, because you just log and it's all from the same place. But in a service landscape it gets really hard to figure out what actually happened, and why it didn't work. So we've instituted standards on that — the way we log — and to make it easier, we've made it a library on our internal NuGet server that's actually put into each new service by the service maker. So we use the service maker to drive the service behavior that we want. If you make a new service now, it already has the endpoints that we expect it to have, and it already has all the logging mechanisms: you can just log as normal, and it will log everything we expect to find. We've had to do quite a few of these things, and that's what the orange squad does. We figure out what turns out to be the best practice that we want our systems to follow, then we pull that out of the projects that started doing it — usually as a NuGet package or something — and we put it in the service maker, so that from then on all new services do it. So the learning here is that services aren't monoliths. They are different: they have different needs and different characteristics, and you need to be aware of that. So where do we go from here? One thing we haven't really spoken about is what we do within each squad, technology-wise. We are kind of tied to an existing technology stack: we're used to working with .NET, we're used to working with Knockout, which is what we have in our main store now. But because we've started splitting things into different autonomous units, we now have the opportunity to work with completely new technologies. Which is what you touched on, right? So usually learnings come through squads doing new things within their domain, within their solutions, and talking to other squads, who say: that's a good idea — don't we also need it? And they might start implementing it, and then this is where the orange squad comes in. What would be really cool, though, is having fully mandated squads — completely autonomous squads — where we pull in operations, we pull in the business users, and there are no people in between. We are really interested in trying to have complete business units, including developers. There will be more microservices, but how small they will be is something we are unsure of, because we are doing our best to stick to the business domains in regard to how we split things up. We think about business context, we think about our domain, right? But we have to take some baby steps along the way. So we are not religious about the size of our microservices. Size shouldn't matter, right? But we need to have them smaller than our huge monolith — that we know. We are also looking into exposing some more APIs. We want to expose public APIs — I hope we get there soon, but we should get there. We don't think we can make everything, but we have a platform with lots of things, and it would be really cool if someone made Android apps, or iOS apps even. Or the other way around — I like Android. And obviously more business areas. So just to summarize: scaling is hard. And we have to find our own way.
We are still finding our own way. We're not there yet, but we're getting there. Culture is crucial; it's number one in getting people on board and actually building a team. We also want to really embrace and learn from failures. This is really important, especially across cultures: failing should be something everyone does by default, the default way of approaching things. And that's actually really hard when you're working across several different cultures, because different cultures have different views on failure. That's been one of the things we've had to work on. We now call them fabulous failures, just to make it a bit better: it's a good thing to fail, when you've learned something. And we're going to continue failing. Hopefully we're also going to continue sharing as we go; we've started doing things in public, this talk being one of them, and we'll continue as much as we can. But one of the greatest success factors, which you can feel on a day-to-day basis, is whether your developers, whether everyone, is happy. Happy people make great stuff. So make them happy, and they will probably make good stuff.

So, let's take a picture. We'd like to get feedback on this. We want to hear more from you; we want to hear whether this makes sense. Contact us on Twitter, or talk to us now. You can also follow Kodepanelet, our YouTube stream; we try to do it at least once a month, perhaps twice a month. It's in Norwegian, sorry, for our non-Norwegian friends; you really should learn it, it's a great language, so easy. We try to interview people and do stuff there. And Pavel has a blog that you really should read; that one is in English, not Norwegian. And I'm just on Twitter. I hope you've learned something, got some ideas, or were inspired to do something, or have feedback for us; that would be awesome. We want to go into more detail going forward. Please leave green notes over there, and since nobody is in a hurry to go home, we have time for lots of questions today.

[Audience question: how many services do you have?] It's in the tens. It's not in the hundreds yet, but we believe we will reach at least a hundred by the end of the year. All written in the same language? Mostly. As I said, we have this service maker which makes it really easy to make services with the technology we usually use. We have a few Java services that we are actually phasing out, because C# is cool. And I have a bet going: I want to be the first one to actually produce an F# service. There is no requirement on language, though. So you're open to new languages? Yes, we are, but the easy path right now is .NET, so that's what most of our developers do. You're not too worried about ending up with a whole zoo of languages? We are moving towards deploying our services to Service Fabric; that's what we're looking at now, not quite sure yet whether it will work. But in that, you can run anything you want, as long as it's executable; it can run, they say. That's further down the line, though. I'm sorry? [Inaudible follow-up about replacing services further down the line.] Yeah. That's one of the things with smaller services: one of the goals is that they should be so small that if there's a bug, you can delete the service and start anew, and that's not a huge cost. I'm not sure that many of our services are that small yet.
But it's certainly something we're hoping to get towards. As I said, though, we don't think there's a magical right size for a microservice.

What are examples of services we have? We have, of course, a catalog service; that is our catalog. We have services for actually placing orders, for getting users, for doing all of these small things. Some services are more like verticals: as I said, our checkout is a separate site. It looks like you're still on the same site, but actually you're not. Your customer page, My Page, is also a separate site, with separate services underneath that handle the storage. Right now we are actually in the process of deploying a new checkout for our B2B customers, on a different technology from our other checkout. So we have two different checkouts on different technologies. Not very different, but different. And that's kind of what we want: we want to be able to test out new things.

How many versions of these services do we have deployed, and how do we name them? Since we say that we don't want to deploy over, we want to deploy side by side; well, this is not something we are very good at yet, so we are working on that. It also becomes more important as things stabilize. In effect, when people are moving quickly, the same squad is usually on both sides of the communication, and then versioning is not that important to us. The things that are stable and in use are the things we don't overwrite. We do have some things in different versions, but not a lot of them, because you run into a different class of problems there, with data storage for instance: two versions are not allowed to share the same data. It's not as easy as it sounds. I think our time is running out, but we have more questions, so we can take one over here.

On your diagram about how you are organized, you have squads, and in a larger group of squads... So the question is, if I understand correctly, whether that diagram is representative of how we are organized today. We have two tribes, which represent the two areas of our domain, as we see it now, based on how the business is also structured. We have a "before buy" tribe: everything about getting a customer into the store. That's where we do a lot of scaling and a lot of work on presenting content. And we have a kind of "during/after buy" tribe, where we do a lot of interactive pages: the checkout, your cart, your My Page, that type of stuff. And the Orange Squad sits alongside. Do you have seven people on each squad? We have between four and seven, basically, depending on how we do things. And when we start something new, we take a task-force approach: we may split off two or three people, they start working on it just to get the service up and running, and then the rest of the squad finishes up and joins them. They stay within their tribe. Each squad has at least one dedicated QA, and the rest are developers. There's one lead who is more responsible for the delivery, but he or she is usually a developer. And how they organize the work is up to the squad.
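The side-by-side deployment idea mentioned a moment ago can be reduced to a small sketch: old and new versions stay deployed at distinct, explicit addresses, and each caller migrates at its own pace instead of being switched underneath. Hypothetical Python; the service names and URLs are invented:

```python
# Minimal sketch of "deploy side by side, don't deploy over".
# All names and addresses here are hypothetical.

REGISTRY = {
    ("checkout", "v1"): "http://checkout-v1.internal:8080",
    ("checkout", "v2"): "http://checkout-v2.internal:8080",  # new version, alongside v1
}

def resolve(service: str, version: str) -> str:
    """Callers ask for an explicit version, so v1 clients keep working
    while v2 is rolled out and adopted at each caller's own pace."""
    try:
        return REGISTRY[(service, version)]
    except KeyError:
        raise LookupError(f"no deployment for {service} {version}") from None

if __name__ == "__main__":
    print(resolve("checkout", "v1"))
    print(resolve("checkout", "v2"))
```

The data-storage caveat from the answer above is exactly what this sketch glosses over: two live versions must not share the same data, which is what makes side-by-side harder than it looks.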
So the question is: how do the microservices communicate with each other? Mainly by REST; they call each other. This is one of the things we are working on changing now, because we think that to be able to scale, we need to push information out instead of having services fetch it when the customer is actually coming in. So we are reworking towards a push architecture: when something happens somewhere in the system, a message is sent, and then the services that are interested in that do their work, so that when the customer actually knocks on the door, we don't have to build the store; it's ready for them. This is also one of the things we are discovering as we move to a service landscape: if you do a lot of synchronous requests, you end up in a situation where, if one service is down or unstable, that bubbles all the way out. Instead of that, we want it so that when some event happens in the system, it propagates all the way out, and if something goes down in that pipeline, the site should still be up as far as the user can tell. You might not get the latest updates, you might see somewhat stale data, but at least we will have data.

Okay. How much is left of the monolith? I should have checked that. Quite a bit. We still have XSLT; there are actually still pages where you will be served by the XSLT, but you can mostly avoid them. Most of the shopping experience has actually been moved over. It's things like the feeds that remain, and those aren't really user-facing; the main things that a normal user encounters are on the new technology.

So, we have one more. When a squad develops a service, do they have to keep maintaining it for its lifetime, or does that responsibility transfer to another squad? The answer is that it stays. There are certain cases where, for instance, one squad has an area of responsibility but no time to develop something that has to be developed; then a different squad can build it and transfer it. But usually they own the service as long as it's alive. And since we've elected to organize by business area, it might be that the people responsible for master data have to maintain something someone else made.

Okay. Thank you, guys. Thanks so much for being here. Have a nice weekend. Green notes!
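A minimal sketch of the push architecture described in this Q&A: when something happens, an event is published, interested services update their own pre-built view, and serving the customer never depends on the upstream service being up at that moment. Illustrative Python with an in-process bus standing in for a real message broker; all names are hypothetical:

```python
# Sketch of event-driven "push" instead of synchronous "pull".
# An in-process bus stands in for a real broker; names are invented.
from collections import defaultdict

subscribers = defaultdict(list)

def subscribe(event_type, handler):
    subscribers[event_type].append(handler)

def publish(event_type, payload):
    for handler in subscribers[event_type]:
        handler(payload)  # a real broker would deliver asynchronously

# A storefront service keeps its own local copy, updated by events...
local_catalog = {}

def on_price_changed(event):
    local_catalog[event["sku"]] = event["price"]

subscribe("price_changed", on_price_changed)

def get_price(sku):
    # ...so serving a customer never calls the pricing service directly;
    # if pricing is down, we still answer, possibly with stale data.
    return local_catalog.get(sku)

if __name__ == "__main__":
    publish("price_changed", {"sku": "ssd-1tb", "price": 99.0})
    print(get_price("ssd-1tb"))  # -> 99.0
```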
How we're scaling the architecture, e-commerce platform and business of one of Europe's largest e-commerce providers, and changing with the business as we go. Growth problems are good; they mean your business is doing well. How can we build an organisation and architecture that give us room to grow and change, while still keeping our customers happy? At Komplett Group we are going through growth as an organisation, in size and scope. In this talk we dive into how the development team at Komplett is breaking up a 10-year-old platform. By making every mistake in the book, we're building a platform for our future! We'd like to share our experiences in scaling the architecture and the development team. We are breaking a monolith into microservices to match change within our organisation. At the same time we have scaled our web development team from 5 to 50, and we're not stopping here!
10.5446/51878 (DOI)
We're continuing now; the next talk will be in English. It's about building the 2019 CCC Camp, but in a game engine: a 3D world you can walk around and interact in, not 2D like Work Adventure, but 3D. It's a project by Rich and mc.fly, and they will show us the game engine and their work. And the main thing is that this is a chaos experience, a call to join in: a call for all communities to build their own villages, like they did at the camp. In the end there is one big level with all the different villages. It's a Unity project; Unity is free of charge to use, though not open source, and you can build the models in Blender, for example, which is free software. Blender is definitely worth learning; it's one of the stronger 3D programs. So, over to Rich and mc.fly. Welcome!

Okay, I think we're on. Hello! I think we're on the stream now. There's a bit of loss, so I already got a nice introduction. Hi, I'm mc.fly. You may know me from Milliways, from one camp or conference or another, or from something else in that context. And yes, I saw what Rich had built with the game, and we came up with the idea of building the camp models together. And, hello Rich! [Audio and slide trouble.] No, my slides aren't advancing at all; I think the clicker died. Awesome, there we go. Off Grid: you already got an idea of what Off Grid is from the introduction. Okay, can you try again, so we know you can hear us? Yes, let's try that again from the top; I think it's all a bit chaotic.

Cool. Thanks, mc.fly. My name is Rich Metz, and if you missed the intro bit, or I was silent during the intro bit: I'm one of a small team of three and a half developers making Off Grid. We're just a small indie team, and Off Grid is our first big original game that we're putting out to other people. Off Grid is a game about activism, information and disinformation, and data and its proliferation.

mc.fly, your slides are down at the moment. Yes, I'm... okay, then let's go. We'll have to do it this way; fullscreen mode didn't get along with the streaming. But we only have a few slides; most of this will be the demonstration, and that works. So, first a little history, which is why this is a long story. Off Grid goes back to early 2011, before much of Occupy had happened. I saw a talk at a conference, an internet and protocols forum with a lot of very dry technical talks, but Eben Moglen was there too, speaking about how we were losing the war for the internet. An open-source project grew out of that, and a lot of people took it up and ran with it.
It was a rallying cry to get people involved in open-source projects, to push back against the ever more commercial internet. He said it was nearly all over and there wasn't much left to do, but that there was still time to help. That is what moved me, in that moment, to make a game about the loss of the internet to private interests, and the idea of a game about winning the internet back. It's a near-future dystopia, a different kind of future, where a lot of activists have been pre-emptively arrested; that's where the story starts. You play a completely non-technical character whose daughter is the potential hacker and activist of the family. She gets herself into trouble, and you have to learn your way around the tools to get her back, digging into a dark underbelly to work out what is going on. Let's have a look, because we have the opening ready. Yes, there it is. It looks like it's working. There's no sound, but I can explain what's happening.

[The intro cutscene plays.] You're the dad, and your daughter says: Happy Birthday, Dad. I got you something. And you're like, oh, that's great. You open it, and she says: yeah, it's not the fancy one, it's the open-OS one. Oh, okay, I'll probably just use it for calls anyway. Oh, and I've put all your favourite apps on it; if you go to Bluetooth, it'll come right up. Bluetooth? Yes, I see. Ah, okay. What does this one do... Then the agents burst in. An agent pushes your breakfast off the table for no reason: "This is a national security letter. Under the Official Secrets Act it is hereby a crime to talk or communicate about this incident. Your daughter will be in processing for 90 days. We'll call you." Okay. And that's that really; that's the setup for the game.

Good. So, let's talk a bit about the technical parts, because the original idea was just to integrate Milliways, but now it's become the whole game mod. So, the game. The game is built on Unity. Unity can be used for free, and there are free assets; you need the editor for the levels and the map files. So, free? Yes, but you have to register. Unity is, unfortunately, free as in gratis, but not free as in libre. But it's a commercial engine, and for us, as a small team on a long project, that was the practical choice. Okay, but the real talk is actually about the game mod. As I said, the plan at first was to put Milliways into the mod, in one corner somewhere in the middle.
Then the CCC Camp came along. I think it was actually at the camp that we started building the Milliways CCC Camp corner, and the results live on in Milliways. We had good pictures; Milliways had a halfway decent aerial overview, and other material too. But very quickly the thing took off and the mod became more than Milliways: it became the whole CCC Camp. So, this is roughly what it looks like as a screenshot. There are many who will remember this; I hope we've already created a little memorial here for the people who were there. The goal of the mod is to have a virtual CCC Camp in Unity that we can walk around in. It works really well at recreating that feeling, which I myself remember very fondly. In the game we have electronic devices we can interact with; some of them are devices that were really at the camp, and some are mine. And at some point we decided that we want all the villages in this camp, every village that wants to be in, so we have a plan for how to get everyone in.

Yes, I think what happened is that we started the Milliways mod and put the initial build on a live stream, and a lot of people at the camp, like the Scottish Embassy and Geraffel, saw what we were doing and wanted to come onto the live stream. We got their villages in, people sent stuff over; it was quite organic. Yes, Geraffel is a village that really put in the work to get their things in. Thanks, Geraffel!

So: as you see, we want your villages in there, if you want them there. And we need something from you for that, because the camp is in general relatively badly documented when it comes to where which tent stood and the information you need to recreate it from images alone. And some villages barely appear in pictures at all, because they had no-photo policies. So in some cases we need your memories to figure out how to do it. For that we have two options.

Option A is the path the Geraffel village took. It's faster, but it requires us to sit down together on Jitsi: I share my desktop with you, and you tell me, no, that needs to go a bit more over there, and that goes there. This works well for uncomplicated villages, where all the models you want in the 3D world already exist somewhere. If you used normal tents, normal rented tents, the common pavilions, the common desks, beer benches and so on, it's easy, because we don't really have to build models; we only have to place existing ones in the right spots. It costs our time, but it abstracts you away from all the documentation work: the assets, the licences, and so on.

Option B is the second possibility: you build your own village. In the long run that's better, because it scales, but it's also more complicated. It doesn't require us to sit with you; it requires you to do it yourself. That's best if you have a complicated village, if you want to make putting it together a group experience,
or if you use models that don't exist yet. Those first have to be built in Blender or Cinema 4D. e-punk did this beautifully at the beginning: he took on the c-base part and built all its structures in Cinema 4D. That's the best way. If you want to do it yourself, you need a development key; you can ask me. Actually, let's send everyone directly to Rich; I have development keys for people to hand out. [Unintelligible.]

Yes, there it is. This is the map as an overlay. It's a bit hard to see, but you can make it out. It's something we used to place all the objects in the right spots, so at least the old aerial images put the right structures in the right places. In this world, let me take the map away for a second, because it's an overlay floating two metres above the ground. And as you see, one of the villages built at the very beginning was Milliways; you can see its current state here. These are the various objects we have, and some of them can be controlled. This thing, for example, is a beer tap. That's something we should explain. I don't know how many of the people watching ever sat at Milliways, but this is the beer tap setup from SHA 2017, built by the Amsterdam crew. That over there is a server rack; it's a very interactive object. You can define a point of interest on it and then wire other interactions to it. It's meant to give the impression of an IoT world; you can play with these devices inside the game. There's also a computer standing there, with a few more interaction points. Many of the devices really were at the camp.

For most people, that's Milliways, but there's more. Actually, let's do this: this is the build-level button, which takes a moment. As you see, most of the area is still green; that's ground most villages haven't claimed yet. Waiting for Unity to react... this is the level-build process: I press the button, and then nothing else can be done. It takes about 30 seconds. Waiting for the demo gods... here we go, the level is building.

What mc.fly is doing here is running a copy of the game linked directly to the Unity mod builder. Now the player has been dropped into the mod, and this is what we have; we're following mc.fly around. That's an NPC; that's Lauri. You can program the NPCs yourself, and you can see him next to two devices. Over by Lauri there's a massive speaker. What is that? It's oversized so you can actually see it, and it can play music. It looks like Jake is now doing a salsa dance by the speaker. Lauri, in real life, comes with a grand piano.
[Unintelligible.] So yes, this is the biggest spot at the moment. And we have a few people standing around here; that's Barrett. Yes, that's Barrett. His hair is wrong at the moment, a bit too light, and it's still on the to-do list. Nela is also in the game, so she gets to be at the camp too. And we have Lauri; but before we dive in there, let's take a quick walk across the campsite, because most of the other villages still want to add themselves. Barrett and Lauri are in the game as people who could not be at the camp in person.

So, here is the Belgian Hacker Village with the bus. And everything else around here is still very green, because we haven't filled it in yet. Here are the tents from the c-base village; thanks e-punk for all of these. And you can see that c-base always has very special models: there's the c-base antenna, and the whole entrance with its own portal. We want to get all the other villages in, in the same way; whichever village I haven't mentioned, we can add it. Let's quickly jump over to Geraffel, because that's in here too, and Geraffel deserves a mention, because they're the ones who actually came to us via the talks. It's not the prettiest tent; this one even has an interior, which isn't used yet, and that will change for the others too. So, here is Geraffel. Yes, they have a hidden tap; I've heard about it. At some point at camp I came past Geraffel and there was this gigantic table of whisky bottles; we tried to recreate that. I don't know, but when I hear Geraffel, I think of a nice peated whisky. That's the part that already exists; it's still empty, and the door is a best-effort thing. And as you see, the whole landscape is there; what's mostly still to be done are the villages. Lauri has a parakeet, by the way. We should hand over to the guests soon.

Let's quickly go back to the Milliways village, because on the way, as you see, there are all these nice plans. You can walk up the hill. I hope to get the colliders sorted so you can go through the door and up the tower, but I'm not promising anything yet. My personal goal for next time would be the bar here, but that would need someone from the village to pitch in. That's the idea: to fill in the other villages, and for you to help us fill them, and it's a big level. What does that mean in practice? For people who are good with Unity: the level will be open source and can be reused. Yes, all the props and models are there for people to use in a share-alike fashion. Right, and... let me log in here. Yes, you can pull this up.
You have a data view: the data you've collected from various characters and devices out in the world, and you can feed it into different devices at various points, once you've picked up the network or device credentials. Here we have the beer server, which mc.fly has SSH'd into, and there are the users of the beer server; mc.fly has an ale and a lager on his tab. [Unintelligible.] It's all fairly thin at the moment, because we haven't really done a content pass yet.

So, the idea is you can kind of find interesting files throughout hacker history: zines, different kinds of court depositions, different pieces of information that people might find interesting to dig around in, and they can unlock different devices or different conversations with different characters. So if a village has certain village-originating documents, those can be in the village or on an FTP server, in small doses, I guess, because otherwise everything explodes. Yeah, you know, copies of PoC||GTFO, stuff like that, I'd say. I mean, and this is probably a good point to bring in Lauri and Barrett, right, to talk about how their interests overlap with this. Don't you think? You guys there, have you got audio?

Yeah, hello, one, two, one, two. Yeah, I can hear you. You just relay everything we say. Not very good sign language. [Crosstalk.] I think I can hear you. Good. Cool. Yeah, also, I mean, Lauri and Barrett obviously didn't make it to camp, and that's kind of one of the reasons that we've got their characters in here: to make sure that hackers, activists and folks who couldn't be with us at some of these events can also be there in spirit, in the game, and talk about it. Next time somebody does want to actually kidnap me and take me to camp, that would also be great. As long as it's against my will, it's fine. I think there's a bad echo on us now. Have you guys got a set of earbuds that you could share left and right on, or something? OBS says it's fine on their end; without an echo, it's okay. Are you saying this expensive video conferencing software has less echo cancellation than Jitsi? Potentially. By expensive, we are talking about OBS, so it's still an open source project. Oh, I thought it was something paid. Subject to the same caveats, yeah. I can hear you guys fine.

So yeah, I think the idea would be, in the long run, that everyone and anyone can add any sort of documents and interactions of interest around camp, whether that's devices of real historic purpose or fictional ones about interesting things that may or may not have happened. Gibson. Gibson, Gibson, Gibson. Yeah, exactly. We need some Gibsons. We need some fantastical devices.
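A sketch of the data-unlock mechanic described here: files found in the world (zines, depositions, credentials) unlock devices or conversations. Off Grid mods are actually written in Lua against the game's own API; this Python version, and every name in it, is hypothetical and only shows the shape of the idea:

```python
# Illustrative sketch of "data unlocks devices", as described above.
# Real Off Grid mods are Lua against the game's API; everything here,
# including the class and field names, is hypothetical.

class BeerServer:
    """A moddable in-game device, like the Milliways beer tap server."""
    def __init__(self):
        self.unlocked = False
        self.taps = {"mc.fly": ["ale", "lager"]}

    def receive_data(self, data_item: dict):
        # Finding the right credentials file unlocks the device.
        if data_item.get("type") == "ssh_credentials" and data_item.get("host") == "beer-server":
            self.unlocked = True

    def interact(self) -> str:
        if not self.unlocked:
            return "Connection refused: find the credentials first."
        return "Users on tap: " + ", ".join(self.taps)

if __name__ == "__main__":
    tap = BeerServer()
    print(tap.interact())  # still locked
    # A zine or court deposition picked up elsewhere could carry the key:
    tap.receive_data({"type": "ssh_credentials", "host": "beer-server"})
    print(tap.interact())  # unlocked
```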
I mean, the other nice thing about the way the devices are set up in Off Grid is that you can create particle effects and flashing lights and smoke and fire, so there's lots of fun to be had. For instance, we could make that beer tap, when mc.fly is getting a beer, actually spray beer all over him and change a value in his AI to say that he's covered in beer and has to go and use the sauna to clean up or something. Because I wasn't at the camp, I missed all of this. There was a portable sauna at SHA, wasn't there? Yeah, I think the Finns had something like that. We were going to get one for EMF, but it got cancelled. EMF is next year though, right? EMF is currently planned for next year, in this best of all possible worlds where the UK government manages a pandemic that's starting to... Yeah. I guess we weren't thinking it would go on for this long.

So, there are only a few minutes left, and it would be nice to get Lauri's and Barrett's view on the idea of a game where you can mod things like this in, and why it piqued their interest. Either of you up for elaborating along that line?

So, the number of uses I can see, especially now that I've seen a little bit more about how it works: number one, obviously, is another method of distributing materials beyond the conventional ones, media, social media and so forth. More to the point, it could also be used, in the same way I suppose drop boxes are in some cases, as a means for users and modders to actually add in information. You can then gamify the process of crowdsourced research by having participants go through large deposits of information that are already public, in some cases, but not utilized. I can think of a lot of examples of that, to the extent that I can find elements in these documents. For instance, the Stratfor emails have been public for about eight years and there's still a great deal of stuff in there; I've come across a few things in the last couple of years, since I've actually had a chance to go look through them. As events proceed, as new names come up, and as we learn the significance of certain topics and individuals, there's a great deal still to be found and processed. And so I've always been interested in the idea of making a game out of this kind of work, because people enjoy games, and games have a different status, for better or worse, in terms of perception, in terms of law, in terms of all kinds of things, than other, more conventional formats do. So that's the first thing that strikes me. I'm also interested, from our earlier conversations, in procedural generation; there's a huge potential there for designing and implementing non-intuitive network structures for collaboration. I know that's not an entirely cracked idea, because I've talked to so many people about it, including yourself, and also some engineers in the US over the last couple of years, and they all seem to think there's something to it. Someone else will have to take that ball and run with it, though, as far as I've been able to think on the matter. I'm sure Lauri has some interesting Finnish ideas.
Well, on that kind of thing: you can build quite a lot of stuff in Off Grid, because it has a Lua framework on top of it. So anything that you can write in Lua, you can write in Off Grid, in some sense. I mean, there will have to be a kind of download-any-given-mod-at-your-own-discretion caveat, because fundamentally most games have read-write access to your hard drive, and if you're downloading a mod that is running Lua, it is obviously sandboxed, but there are sandbox escapes and things along those lines. So, conceivably, that could itself become part of the game: people could try to create a mod that allows you to safely run snippets of other people's code, a level of sandboxing, and then people could try to attack and exploit that. And so you could have a kind of meta-level of hacking via the interactions of different mods.

You could also have a subgame in which you put in, let's say, 10,000 documents pulled from a certain data trove over the last ten years, and in the game you have to figure out: what is the connection between, say, Mueller and Flynn? What connects these two individuals? Find out in these emails. Those are examples of things we already know, but we could use them as a means of getting around the problem of the press not really following through on things very well, if you know what I mean. So there's all kinds of potential here, I think.

So, stated otherwise: how would you like it, Rich, if your lovely video game was subverted into a platform for leaderless insurgency? It would be a pretty video game, to be sure, against capitalism and the state. Would that be okay? Well, that's the thing with making video games: you have to be prepared for the community to take you in any direction they want. So if it's that, instead of drawing lots of penises everywhere, I think we'll plump for that. I can't guarantee those are mutually exclusive. Oh, that's true. That's true.

Can I control my NPC, walk up to anyone in the game, and have the character deliver a message for me? You can. I mean, you can play as any character. So although we've got the main hero character as the playable character here, your NPC, Barrett's, or Lauri's, or any NPC anyone wants to make in this kind of low-poly style we have with the customizable characters, we can basically swap them in. So Barrett could be the one exploring something and then going to visit Lauri to get a conversation going about the trove documents, stuff like that.

The other thing we talked about before, Rich, was the potential to use this platform as a way of bridging time and space, in a kind of archivist sense. You could go to the virtual EMF camp, the virtual CCC camp, and you could watch talks from historical camps, potentially through the game, through a streaming mechanism. That allows people to attend things virtually that they weren't able to at the time. And there are a lot of people who would like to dip their toes into hacker culture who haven't been swept into it by the natural currents of their life. In Second Life, something like that was done: an actual conference you could walk around in and attend the talks.
And of course, those were the first targets we had back in the 4chan /b/ days. So they got their toes well and truly dipped into hacker culture, without even knowing what they were getting. And I guess the other main point is that video games are rapidly supplanting, if they haven't already, the traditional, more broadcast, less interactive forms of media as the way that culture elaborates on and reflects upon society. So having a game like this allows for a slightly more realistic, slightly less staid and clichéd portrayal of hackers. Hackers in films are just a modern-day version of a wizard, a lazy means to advance the plot. Magic, basically. Wasn't there a wizard, the Nintendo one? The Wizard? That was one of the Nintendo ones, with Fred Savage. In the movie called The Wizard, Fred Savage was in a Nintendo competition. I don't know. Hold on a second. No, not Fred Savage.

To complete the point I was making, I had a potential interest in some elements that I've now forgotten. Well, you're essentially saying that usually hackers are a plot-bridging sort of character. Yes, and there's more to it than "here is this person in the crew who makes things happen by tapping on a keyboard". It's about the actual relationship that hackers have: people who explore the frontier of the potentiality of technology interacting with society, and in some ways their responsibility for the way that progresses. Talking about the internet that we had lost: some people, not saying any people watching this, got a little bit comfortable in the industry and allowed it to be taken over and banalised. But we are always on that frontier. To the extent that you are exploring new ways that technology can be taken apart, put together and made to do interesting things, you are working on that frontier; you are clearing the routes along which the future is built. This would actually give people a taste of that, in a participatory way. I think Off Grid's mechanics are all built around that notion of the pen being mightier than the sword, and information as the battleground, and all that kind of stuff. So it isn't just surface here: anything you want to make, with data as a precursor to the interaction, is going to be central to how the characters react and are manipulated.

It's also a generational thing as well. A lot of people still don't realise that we live in a data economy: data is now the most valuable global commodity, having surpassed oil and fossil fuels, and the economy depends on its flow. So for people to see, through playing this game, that what gives you power in a virtualized, digitized environment is a rich interaction with the data that is constantly flowing; just seeing how much ability that gives you might give people a better appreciation for what they are giving away daily, without any informed-consent negotiation, to the services that strip-mine them.

So, that was definitely an inspiration at the start of the game, and it's a perfect note to end on. I think mc.fly is ready to do the outro, because I think the session is about up. I also want to thank Lauri and Barrett for bringing the hacker and activist perspective on why this is interesting. Thanks, folks. Thanks for having me along.
mc.fly, can you hear me? Are you ready to do the outro? Yes, thanks. Thanks for the chance to show this here. There will be a nice Milliways party. And there's another talk tomorrow, which is now in the schedule, about postcards to hackers in jail; that's something we always do at Milliways. Thanks for your time, and it would be great if you joined in. Have a nice evening, everyone. Happy rC3. Hack the Planet. Hack the Planet. Hack the Planet.

Yes, that's the message. Thanks a lot to Rich and mc.fly and their guests, and the guest parrot. Join the community: if you know some people who know their way around 3D, tell them to build their own villages, so they will be realized in the next version of the Off Grid mod, and of course be there for the next virtual event. And we did change the program a little: mc.fly will give his talk on the Mail to Jail project, which you might know from Milliways, tomorrow at half past five, in Milliways.
We have started building the CCC Camp in Unity as a game mod for a hacking game called Off Grid. We have built the rough camp site, Milliways and most of Geraffel and the c-base village. We would like to extend this to more, if not all, of the camp site and create a digital walkable memorial of the CCC Camp 2019. We will be using the rC3 to work on this map mod. This workshop is for anyone who wants to come and learn how to mod 'Off Grid', a hacking game about surveillance and data privacy, and make their own levels and hacks. Come and hack the game with us! An inside look at how the Lua-moddable videogame 'Off Grid', which is currently in development, can be used by modders and hackers to explore hacks and vulnerable devices, and to open up hacker culture and infosec stories to future hackers, both young and old! We'll be going over how to use our LevelKit to build levels and how to mod or create your own Lua scripts to make your own righteous hacks, devices with abhorrent vulnerabilities, or data types that will alter space-time. Come with a laptop and some unique hacking ideas and we will help you with the rest. No previous knowledge of Lua or game development required :D Your machine will need to be able to run the Unity engine (https://unity3d.com/unity/system-requirements). Windows is the most stable platform; there is a beta version of the Linux Unity editor available, but it does have some issues (hence not RC yet), so that's worth bearing in mind. Before attending, if you could download Unity version XXXX, that would be great (https://store.unity.com/download?ref=personal). Also, having Steam installed would be a boon, as this is the easiest way for us to distribute the tools (http://store.steampowered.com/about/). But if you aren't keen on the platform, we will be able to pass you the files directly; you will just have to download some files from the internet. :) Videogames have a culture of being a unique medium for exploring complex systems and ideas. 'Off Grid' is a third-person stealth/hacking game which forgoes weapons for hacking tools and ingenuity, and it is completely moddable. The idea being that players, modders and hackers (many of whom caught the hacking bug by messing around with games in the first place) will be able to use the game to tell new and interesting stories about hacking and surveillance, create new hacking tools in the game, and, who knows, even drop a 0day or two for real... Expanding on a talk given at Electromagnetic Field camp in the past, Rich will give a live demo and talk you through Off Grid, and show how the moddable API he and his co-devs have been developing means that players and modders can not only use Lua to build new levels but also their own hacks, hacking tools, data types and vulnerable IoT devices. Although the visualisation is an abstraction of hacking, all the underlying principles are made to reflect real live circumvention tech and hacking tools. mc.fly will walk you through the CCC Camp 2019 mod and explain how to design and add your village to it.
10.5446/51880 (DOI)
Hello everyone! You can already see the lovely picture in the background. Everyone will be thinking: hey, I've always wanted to get back to the beach, and it wasn't possible this year. But if it looks like this, I don't think any of you actually wants to go there. And that's why I'm standing here: I want to tell you what you can do so that the beach doesn't end up looking like that, and, to keep the entry barrier as low as possible, what simple things you can do to treat your environment better. I started looking into this quite a long time ago, and last year at Congress I already gave a small talk on the topic, but it was oriented mainly around the office and office routines, which I don't think makes much sense this year, because most of you are not in the office but in the home office. So I thought I would simply walk through a daily routine and try to find examples along the way of really simple, low-threshold things you can do to treat your environment better.

Let me first mention the approaches I follow and keep coming back to: first, the big topic of avoiding; second, reducing; third, reusing; fourth, repairing; fifth, recycling; and last, composting. The most important one for me is avoiding.

So, starting in the early morning after getting up, the day begins with hygiene in the bathroom, and there we currently have the problem that most of you probably use plastic-packaged shampoo, shower gel and so on
to wash with. I can recommend switching to bar soap: a simple product that you can mostly buy unpackaged, or at least in packaging that produces far less waste. On the topic of shampoo: you can use soap there too, but it's a bit trickier, because you first have to make the switch. Don't forget that most shampoos contain quite a lot of microplastics, surfactants or silicones that treat and coat your hair in a particular way, and switching to soap will take a while. In plain terms: if you buy a hair soap, it takes some time for your hair to get used to it, so you have to be a bit patient. And I don't want to conceal that people who live with hard, calcareous water then have to rinse with a so-called acid rinse: take roughly a tablespoon of apple cider vinegar, or citric acid (I've had better results with apple cider vinegar), in about 500 millilitres of water, and after washing with the soap simply rinse your hair with it. You can rinse again with water afterwards or just leave it in; the vinegar smell vanishes within minutes as the hair dries.

On brushing your teeth: that's a somewhat harder topic. I fiddled with it for quite a while to find a solution that is reasonably simple and low-threshold, and I'm still not really satisfied with the results I've achieved. The usual toothpaste usually contains microplastics, since there are abrasive components in the tube, and the tube itself is of course plastic. There are toothbrushing tablets and tooth powder you can use instead, and I find that already a fairly big step, because with the tablets you first have to find a workflow for using them. It's definitely still improvable, and I hope more options appear; I also tried making toothpaste myself by stirring tooth powder into coconut oil, and it was not a pretty result. So, as I said: toothbrushing tablets, not especially great, but usable. The problem with toothbrushes is that it's simply hard to get ones that contain no or little plastic, or that are made from sustainable raw materials. Bamboo toothbrushes have been around for quite a while; I'm still testing those, so I can't say much yet, but I'm semi-satisfied. There's also a special kind of twig you can use, called miswak: the bark is cut off and you use the end as a brush. The problem is that it doesn't bend, which in plain terms means that if you have a small mouth like me, you can't brush particularly well and simply don't reach the spots that matter.

The next point is deodorant, which of course still mostly comes in plastic containers, especially the roll-ons and sprays and so on;
there I can warmly recommend working with deodorant cream, which usually comes in small tins that you can reuse. The deo creams have been a real success in our household; I even managed to convince my son to use one. Maybe I shouldn't say that.

The next point, when you want to put on lotion afterwards: all the lotions and creams out there are of course in plastic packaging, and they're relatively easy to replace. There are so-called body butters and body melts, and I've actually found that with them the skin no longer needs as much cream. I unfortunately suffer from very dry skin and normally need a lot of cream, and since I switched to body melts and body butter it has simply become much better. You can buy them quite cheaply in the usual natural-food shops or organic supermarkets, and by now I've actually even seen them in the ordinary drugstore. On the topic of exfoliating: broken rice, for example, makes a very good scrub and is relatively easy to use. And then, for people who wear make-up and want to get it off again: all those cotton pads are of course not particularly sustainable, and smaller shops and makers now offer crocheted or knitted little pads that you can wash and therefore reuse.

The next point I want to get to is breakfast. It's only a small point at first, and it goes in two directions. One is: when you go to the bakery and get something, try to avoid being handed an extra bakery bag. There is currently a trend in organic supermarkets, and in supermarkets generally, of switching to paper bags, and one simply has to say clearly: a paper bag is not better than a plastic bag. On the contrary, it needs even more water in production, and wood has to be used for it; none of that is good. In plain terms, it definitely makes sense to reuse every bag as often as you possibly can. At our place the plastic bags really do get shaken out and wiped off, and we write clearly on them what each one is for: this one is for the potatoes, this one for the carrots, this one for the bread rolls, and so on,
so that from the start you simply always have a roll bag with you and can just take the rolls in that. It's still difficult in the shops, because the staff sometimes react with "I can't hand you that like this", especially in pandemic times; it will probably only make sense to push further on this once things open up again. If you buy tea or coffee, and especially coffee to go, it makes sense to really use a reusable cup and bring it along. Some supermarkets already have deposit systems where you can buy a cup for a euro or so and then reuse it. As for the rest of breakfast, if you don't eat rolls: it makes sense to try to hold out until lunch rather than snacking on something sweet in between, and to eat oats or muesli for breakfast, with nuts, dried fruit or fresh fruit, as you like.

Then, on to work, whether you go to the office or work from home. Something that helps a lot, and that everyone has certainly heard before, is turning the heating down by one degree. It takes getting used to, and you definitely just have to dress a bit warmer; and I say that as I stand here on the stage with a thousand open windows, in tribute to the pandemic, at something like ten degrees, simply wearing two jumpers. You get used to it, and above all, what I recommend to everyone: get up every hour and actually move, actually go outside briefly. Then you no longer get that intense freezing feeling; if I sit at my desk for three hours without moving, of course I get cold much more easily and quickly.

What I can also recommend is definitely reducing your electricity consumption. I mean, that is the biggest problem, and I believe for all of us who sit at the computer a lot it's the biggest difficulty and the biggest challenge. What I can say from my own attempts: try not to leave devices on standby, but really switch them off, especially in the evening, or at lunch when you know you're taking an hour's break; then you can actually shut the computer down, or at least the monitor, which keeps getting forgotten, or which I think is really hard for many people to remember. Likewise, if I leave the light burning everywhere, I shouldn't be surprised that more electricity gets used. For example, for a few years now, during the Christmas season, we have only had solar lights, which we set up: then we get light from the sun and don't need extra power for the Christmas fairy lights.

Another big topic at work is simply copying and printing. First of all, of course, always ask whether something really needs to be printed: can't I handle it with a digital tool instead? There are so many great tools you can use, and you can even win over a somewhat tech-averse boss like mine to using them. And when
I do print, then we mainly use recycled paper, which is admittedly a slightly tricky topic, because the people who maintain the printers have told me several times that the printers suffer from it. I haven't been able to verify that yet, but I'll stay on it; the open question is whether the printer really suffers from the recycled paper or not. Then of course print double-sided, in black and white, and where possible put two pages on one printed side. What we also do: any printed paper that does not have to be stored GDPR-compliantly can be used as scrap paper on the blank side. That has the additional advantage that everyone in our office pays closer attention to what is GDPR-relevant and what is not, just as a side effect. Another point at work is to ask yourself: do I really need the newest device, can I buy it used, can I repair it, is it worth it? You don't always have to put the hottest new gadget in your home, just as an aside. Then, when lunch comes around, it is of course clear that I try to cook for myself. Now comes the objection: but if I am on my own, that isn't worth it, it's much worse than getting takeaway. What I recommend is to plan and prepare ahead: on Saturday or Sunday, cook something nice for yourself and simply make more of it; then the electricity or gas you use pays off much more, and you can freeze part of it. Okay, freezing is not exactly the most energy-efficient way to preserve food either. I have to admit I tried, for example, to preserve zucchini soup and the like in jars, and that is not so easy, so I would recommend, because it is also lower-threshold, to first freeze things in small containers, so you can take them along when you go to the office, or have them ready when you are at home. What I can also recommend: if you do go somewhere to get food, which is rather difficult at the moment but doable, bring a container with you. In the pandemic situation many will of course say, no, that doesn't work, I can't put anything in your container, no way; but we will see other times again, and then it will probably work again. Otherwise I can definitely recommend bringing some fresh or dried fruit to eat in between, or eating the sweet stuff right at the end of lunch; that is firstly healthier for the teeth, and secondly you don't get that craving two hours after lunch where you absolutely have to follow up with something sweet. As I said, it is extremely important to think about what you cook and make. What also matters at lunch is to really try to cook seasonally, and ideally regionally, or organic. I know many will now say: seasonal means in winter there is only cabbage. That is actually not the case;
in winter you can still get things that actually grow now, partly thanks to our lovely climate change, which makes things grow that previously did not grow in winter. There is lamb's lettuce, and there are various kinds of cabbage, for example, and in addition you can get preserved things, or, if you ferment, that too, though that is already a bit advanced and not so low-threshold; but by now you can actually buy fermented things in supermarkets as well. So, that was along the daily routine. Now to the regular things where I have to pay attention to what I consume, because in the end it is always about consumption: groceries and drugstore items. When buying those, as I said, take things with you: I take my cloth bags, I take a paper bag. I have the advantage, for example, that there are three zero-waste ("unverpackt") shops in my city. I thought that was rather unusual and looked around a bit, and I found that even smaller towns have zero-waste shops. I also talked to friends who told me they simply mentioned it at the small supermarket around the corner, and it actually set up a small section where you can get unpackaged groceries, which I think is quite good. Otherwise I can recommend trying to go to weekly markets, or: you have surely driven past a farm near you where someone has put out a sign, strawberries for sale, potatoes for sale. It is really worth asking there: don't you maybe also have a bit of leek or something else? Quite wonderful things have come out of that. There are also these Solawis, community-supported agriculture schemes, with different models: some say you pay a certain price every month and get a crate of groceries for it, others say you put in a certain amount of money at the start of the year and receive certain things in return. So there are various options; you just have to move through the area with an attentive eye, and maybe you will see something that appeals to you. On the topic of Christmas, for example: a huge amount of packaging waste is produced because everything is wrapped up nicely, then torn open and thrown away. What I can recommend are reusable gift bags. In our family they are really circulating by now: oh, I gave that bag to grandma five years ago and now it is coming back. You simply put the presents in and pass the bag on. That is definitely one way to save a whole lot of waste. The next point is clothing and consumer goods. As I said, conscious consumption is important: ask yourself whether you really need things, whether it might be possible to get them used, or perhaps to borrow them. We, for example, now have a drill that circulates between
five different families: it simply stays with the person who used it last, and when you need it you tell that person, hey, I need the drill again, can you give it back to me, and then you use it and pass it on. The same with a socket wrench set or other such things. It works quite well to borrow things like this, pass them on, and form a small circle. When buying things, especially clothing, which is of course a very difficult topic: as I said, buying used would be best. Otherwise, if you buy something new, I would always ask myself: am I 200 percent convinced or not? Do I absolutely want this, does it really fit me 100 percent, or is there a little "but"? If there is a little "but", let it be. There is research on the feeling of happiness when buying, and yes, it is only a short-term feeling of happiness that you create; in the end it is about finding something more lasting. Another thing you can do, so as not to buy all sorts of things the moment you pull out your card, is to pay cash. That helps a lot, because the inhibition threshold for buying things is higher when you spend cash; there are studies on that too. What I can also recommend is a so-called buy-nothing day: really say, okay, today I buy nothing, and pull it through relentlessly. That helps a great deal. By now we actually only have buying days instead of buy-nothing days: we deliberately buy things only two or three times a year, things we have previously written on a wish list. So it is also sensible to keep a wish list for consumer goods and write down exactly what you really want, what you really need, what is truly necessary, and then see whether you can get it used or borrowed, or whether it really has to be bought. Then there is the topic of travel. If I have to get somewhere: I know that people who live in the countryside usually have no option other than the car, because the bus timetables are simply impossible, etc.
I know how that is. What I can recommend, after pandemic times, is forming carpools: really check whether a colleague lives in the neighboring village, whether you can take turns, so one takes the other along. That would be one consideration. Otherwise, public transport or bicycle, that is obvious. With public transport, I know most of you will say: in pandemic times I am not setting foot in there. I understand, I also do it as rarely as possible, or I pick times when I know few people are travelling; still, I would recommend keeping it in mind. And certainly ask yourself: do I really have to drive to the supermarket around the corner, or can I perhaps do that by bike? Then to a completely different topic: cleaning, washing and care. I also have to clean the flat and wash my clothes, and that is another slightly tricky story. I experimented quite a lot with detergents and found that all this stuff the industry wants to sell is really not necessary. For example, I wash the 40-degree colored laundry with soapwort detergent, which I can buy in a five-liter canister, and otherwise I simply wash with ivy leaves: they are cut up, put in a small sack and washed along with the laundry. That works wonderfully, gets the laundry super clean, and is no problem at all. I had also read a few times about chestnut detergent, but honestly that is too laborious for me to make, and you have to use it up relatively quickly, so I would rather advise against it; ivy leaves you can get anytime, anywhere, even in winter. Cleaning is a bit of a double-edged sword. On the one hand, I dissolved washing soda in water, put it in a spray bottle, and tried to get by with that; it works so-so, I am not a hundred percent convinced. So we simply bought a concentrate at the zero-waste shop and keep diluting it; that works fairly well. As for the toilet, the real sore point where many say, no, I need an aggressive cleaner or it won't get clean: I can report that urine scale comes off very, very easily with citric acid powder. That is really super easy, totally easy to get, also available unpackaged, and very effective. What we otherwise did is simply dilute vinegar essence and clean with that, sometimes with orange peels in it so it smells a bit nicer. In the kitchen, for example, I switched for quite a while to using potato peels for cleaning stainless steel, because of the starch in them; it gets the stainless steel sink super clean, and it actually stays clean much longer than with all those other strange cleaners out there. That is quite astonishing, but also interesting. With that I am basically at the end of the simple things. I wanted to say again that I would recommend these approaches to everyone: first and foremost avoid, as the next step reduce, and then the step of reusing, meaning lending, creating circles, sharing, etc.,
repairing, meaning going to repair cafés or repairing things yourself; and recycling rather as the smaller part, because recycling is problematic at the moment: you know yourselves that a lot of our waste is simply incinerated and not recycled, which is why the top priority is always avoiding. Composting is of course nice, but for many people in the city it is not possible because of vermin, and then you have to consider whether to get a small bokashi bin or something, but that is already somewhat advanced. For the advanced things I can tell you that on 29.12. I will be here on the stream somewhere at 15:00, please check again with the c-base, doing a small workshop where you can ask lots of questions and exchange ideas in a BigBlueButton channel. You can of course also submit questions via my profile or via our Telegram channel. What I wanted to say in closing: the whole thing is a process, not a competition. It is not about setting off and doing everything at once; it is about, step by step, accepting a certain renunciation of consumption, and about actually giving up our convenience piece by piece. That is the point. Just start, try things out, and see what fits you and what doesn't. Thank you very much. I don't know whether there are questions now, I think not. We can't hear you. Can you hear me now? Yes, great. At least two things I have that you didn't mention: I know wash nuts from India, they are supposed to be very good and are used there, they would also be a biological replacement for detergent, at least when the soiling is not too heavy. And for quite a while I used, the name escapes me, something that is also used in India for cleaning teeth, a kind of wood you chew on. It works really well and above all makes the teeth nice and white, so for the beauty fanatics among you it is really worth a try. On the wash nuts I can only say: the problem is that the wash nuts come from India, and since people there started exporting them, they no longer have them available themselves, and instead use all the detergent junk produced here, in plastic, with lots of surfactants, polluting their environment there. That is why I deliberately did not mention them; wash nuts are, in my eyes, not an alternative, simply because they are effectively taken away from the people there. As for the miswak, I mentioned that earlier too: it is cool, but it has the disadvantage for people with small mouths that it is hard to use, because it is not curved; the miswak is really for chewing on. I have also cleaned my teeth with grass stalks while camping, that works too, but I am not fully through with all the possibilities there yet. Maybe you have something more, then we can have another look. Exactly, thank you very much.
A wide range of areas of everyday life are considered and simple examples presented to show possibilities for change. There are many competent statements on the topic of "acting against climate change", but I have noticed that there are hardly any compactly summarized, easy-to-implement examples from practice. Internet portals of course offer a lot of information, which, with time and experimentation, can be very helpful. But one can easily get bogged down, or quickly lose motivation because of the sheer volume and the knowledge that is assumed. That is why I want to show options that are easy to implement. These practical examples come from almost all areas of life; our use of electronic devices, for example, takes up a lot of space. Here we find renunciation and behavioral change hard, understandably. Our consumption behavior is the crux not only at this point: food, travel, personal care, etc. can all be lived in a more resource-friendly way. However, this cannot be achieved without partially giving up our previous convenience. This raises the question of reflecting on our habits so far, of the will to change, and of an actual beginning. Here, by presenting low-threshold examples from practice, I want to offer entry points.
10.5446/51882 (DOI)
Welcome to the c-base stream, we're starting. Well, I hope, given the shaky connection, that I haven't lost too many packets. Like probably most of you today, I am stuck in my home office and want to talk a bit about home automation, starting at beginner level up to the hardcore nerds who want to go considerably deeper and do considerably more with it than the standard user. For this I have prepared a small mini-talk for you today, plus a workshop afterwards that I would like to do with you. Unfortunately, this was all submitted in English, although I wanted to do it in German, so I will push through in English now; I hope that is okay. So, we switch to English and I start my slides. After the slides we go into a bit of a workshop, hopefully together with you, and we'll have some fun with the devices. So, about me: my name is André Helvich and my nickname is Teco. I was born in the 80s, in Berlin, I have been a member of c-base since 2001, and for a few years now I have also been on the board of c-base. I work as a system and cloud engineer, which means I deal a lot with automation, and home automation is my personal favorite here. Today I would like to talk about home automation in general, some hardware aspects, some software aspects. I would like to do some definitions and wording, because this can get very confusing later. Then, after this small talk, we will have a hands-on; hopefully we have one and a half hours or more. We will integrate things together: if you already have existing hardware that is capable of being integrated, we can do this, and if you don't have any hardware yet, don't be sad, we can fake it or find other interesting digital stuff that we can integrate into your home automation. We will chat about automation in general and share our experience with each other. Smart what? What "smart" means, or what people think is smart, is usually a really huge misunderstanding, because people think most stuff is plug and play and that everything you need you can get from a single vendor. But this is not true; we are far away from that. If you really want to get things smart, you have to choose different vendors, and we have tons of them. Everyone has another kind of implementation somehow. Some of them are also locked into the cloud, so you have to free the firmware somehow, or free your device from this evil firmware that by default connects to the Neuland. And yes, for sure, you have incompatibilities between different vendors, and they don't like it if you integrate stuff from other vendors into their solutions. That makes everything more complicated than it should be. So, how to start? Usually people get started quite easily and simply, as I did, with some simple light bulbs. These Philips Hue are very common, or Osram Lightify. In the beginning it's okay if they only control the lights or show their status; it means in the beginning it's okay for you to turn on your lights, change the color, or see remotely whether they're on or off.
When you leave the house quickly and, on the way somewhere, take a look at your mobile to see whether you forgot to switch some lights off, or whatever. For the beginning this is quite simple, and you can start very easily with it. So, what do you get out of the box from the vendors directly? I'm staying with these light bulbs because they are the easiest example to show you what the vendors give you, and an easy way to get started somewhere. Later we will try to find some more interesting examples, but those are reserved for the workshop. So, if you want to get in touch and talk about more specific things, like particular sensors, etc., this will be done later. In the morning you wake up; it's quite nice if you can say: I want to wake up at 6 o'clock, so start the lights ten minutes early. Okay, Teco, you apparently have massive bandwidth problems. The sound here is very, very bad right now, you really can't understand it. I'd say you take a quick look at your sound settings, or tell the kids to stop gaming. Okay, ladies and gentlemen, the next thing is live, and Teco's bandwidth is still somewhere, I don't know. What we'll do now: I think the talk is good, and we will just wait a few seconds or a minute so we can perhaps start again. I can see a connection for Teco now, and hopefully it will work better. What happened? Can I continue? Is it better? From where was it cut off, roughly? Thanks. So: it starts increasing the brightness percentage over a couple of minutes. Starting ten minutes before 6, it raises the brightness by 10% per minute, and after 10 minutes the brightness is at 100%, and you have woken up very gently. That is nice, but it is not what I understand by smart things; that is just raising or dimming the brightness over a certain part of the day. Another thing that comes out of the box is a delayed shutdown, for the time when you go to bed and are far away from your light switch: you press the button and say, shut down in 10 minutes, and after 10 minutes the lights are off. Another out-of-the-box thing, if you have supported hardware, is switching the light on and off based on detected motion. In general you just have a motion sensor, this motion sensor switches the light on, and after a few minutes without detected motion it switches off again. That is nice when you walk through a hallway you only want to pass through, and the light works completely automatically without you doing anything. That is quite smart, or I think it could be smart. But the problem is that it only works with supported hardware when you stick to one vendor. The vendors do have motion detectors, but they arrived very late. What is on my personal wish list, for something a bit smarter, is to combine more things than just lights with the motion sensor. I have a window contact, so I know whether the window is open or not.
And when a window has been open for more than 15 minutes, because that is enough to get fresh air into the room, an LED indicator in the living room lights up, for example, to notify me that this window is open, or a message goes to my mobile, via Telegram push message, whatever you name it; I want to know that I should close the window again. Yes, you can do that. I would also like to integrate unsupported hardware: I would like to combine motion sensors that I wire up myself, because you can get these development boards with a wireless chip on board, attach a motion sensor, and simply use that for your motion detection. You can save a lot of money that way, instead of buying the vendors' sensors, which are often very expensive. So, is this an easy piece of cake? Yes, but no. I would say it depends on what you really want. If you just want something simple, you can do that out of the box with the vendor tools. But if you want more, you need some glue; we will look at that later. Now we talk a bit about sensors, actors, and what that actually is. Sensors: I have already given a few examples. We have, for example, motion sensors; we can sense temperature, light and air quality. And what is interesting: almost anything can be sensed, for example the current Bitcoin price. For some people I have talked to it is interesting to know the current Bitcoin price, and with smart home automation you can also send notifications to the TV or elsewhere to give you the information about the current Bitcoin price. CPU load is another example to show you what you can use as a sensor, because every web page or every shell command you run can act as a sensor, and the result can then be used for other things. We are mostly talking about Home Assistant here; I will show you a few alternatives later that I looked at in preparation but did not use. That is why I point here to the Home Assistant integrations, the available sensors that come out of the box: if you go there, you find many sensors you can use, including the sensor type REST, where you send a REST API call to whatever you name. So, these are sensors. You also have actors. Actors are simply switches of some kind: if the temperature is too low, start the heating pump, or start the heating, whatever it is; that is the actor. You can also switch power sockets. Shutter actors are something a bit more specific than just switching on and off, because you apply power for a specific time and direction, so it is a bit more than a plain switch. And yes, actors are everything that switches somehow. The next thing is automations. An automation means: when a sensor is in a specific state, do something. That is pretty easy: a sensor state triggers an action.
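To make the "sensor state triggers an action" idea concrete, here is a hedged sketch of the window example from above as a Home Assistant YAML automation; the entity IDs and the notify service name are invented for illustration and would have to match your own setup:

```yaml
# Hypothetical entity/service names -- adjust to your own setup.
automation:
  - alias: "Window open for too long"
    trigger:
      - platform: state
        entity_id: binary_sensor.bathroom_window   # window contact sensor
        to: "on"                                   # "on" = window is open
        for: "00:15:00"                            # open for 15 minutes
    action:
      - service: notify.telegram_me                # push message to the mobile
        data:
          message: "The bathroom window has been open for 15 minutes."
      - service: light.turn_on                     # LED indicator in the living room
        entity_id: light.livingroom_indicator
```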
You can take this as far as you like, for example building a party scene or whatever; these are the kinds of things you can do with automations. Let's have a short look at the types of hardware you will find on your way. I will not give you hard recommendations, only what I have used in the last years, and I am sure there are many more options. Many devices talk Zigbee, Bluetooth or WiFi; a lot of them have these small ESP chips on them to use WiFi, so they simply connect to your WiFi and are a client in your network. And for everyone who shops on AliExpress, you find many more vendors there, but in general they come with Zigbee, Bluetooth or WiFi. That's it. Another thing that is certainly important for us is switches. You can get nice switches that you can put on a plate on the wall, or on a magnet that you attach to metal so the switch holds there, and you simply use it like a remote. Xiaomi has switches, as do Sonoff and Shelly. If you want to switch mains voltage or a power plug, you can control these with Sonoff or Shelly devices, and for sure there are many other switches as well. Heating is interesting if you own a house, but also in a flat it can be good for raising your comfort: in the morning, if you don't like it too cold, the temperature starts rising two or three hours before you wake up; that can be automated as well. Home automation for underfloor heating is mainly Tado and AVM, plus the generic radiator thermostats that you screw onto the valve and that turn the wheel to control the temperature. In general, you can also integrate any device that is connected somehow, for example by Bluetooth. I like having a single page with a general overview of everything. Think of this: you have two lights from different vendors that are not compatible with each other, both in a single room, and you want to switch both on at the same time. It is possible: you open app one, switch the light on, open the second app, switch the light on. I believe nobody wants to do that; the other way is to lock yourself into a single vendor, but that is mostly not the case. And the source code of the vendors is usually closed. I am very happy when there is a custom firmware, built by hardcore nerds, that brings new possibilities to the same hardware that the vendor does not ship. The vendors also have XML or REST APIs, but mostly they are only reverse engineered and not really documented online. So what do we do about the missing glue? We have to add a bit of glue in between, and this glue is generally called a smart hub. There are several solutions for this, which I want to show now. The first thing you may already have heard of, and which most people looking into home automation find, or if it has been in your life for a long time I am sure you have already stumbled over it, is FHEM. In German that is "Freundliche Hausautomation und Energie-Messung", in English a friendly home automation and energy measurement. It was started in 2005 and is developed in Perl.
But I have to say, Perl is not my personal preference. You can do a lot with this thing; it is pretty lightweight, it would be an option, but I don't like everything being in Perl. If you have an affinity for Perl, you can certainly use it; it is nice, a lot is available, but if you want something more exotic, you always have to write your own code. Another thing that exists is openHAB. It was started in 2010 and is written in Java. I don't know what you think, but I think Java is just a big mess. My personal preference is always to keep things as small as possible; it should run on a Raspberry Pi or an even smaller device, and with openHAB it is not really possible to run on such a lightweight device. Here we go: in 2013, Home Assistant started, developed in Python. That is my personal preference, and I will show it to you in the second half. Another thing is ioBroker, started in 2014 and written in JavaScript, as a Node.js application. I haven't tested it yet; I have no experience with ioBroker. My personal preference in this case is Home Assistant. I started with FHEM and had a very, very short look into openHAB, but for some years now I have been using Home Assistant, and it is pretty easy. There are several reasons why. First, it is small or light enough to run on a Raspberry Pi; it runs from version 3B+ onwards, I believe. It has a focus on privacy, on keeping your data local, because you don't want to expose your whole home automation to the web; I want the devices to be free from the clouds, not giving data away, and to have everything at home. It is pure open source, in Python, as I told you before, and it uses YAML syntax for the configuration. Because I use YAML at work, I am familiar with it; I really like how you configure Home Assistant, and that is why I use it. It is very easy to get started with: in the beginning you can get your home automation system running very quickly, but later, if you want to go into more detail, you can also develop a lot on top of it yourself. These are the same things you can do with FHEM: if you want your own stuff, you can do it with any of them. But Home Assistant sits in the middle of it all: easy to start, yet powerful enough to keep. They have a very fast development and release cycle and a very active community. I have never had problems with the updates. The good things about Home Assistant: it is very easy to start, and the documentation is great; for every module you find a good page with the configuration examples, and anything you have to watch out for is also noted on the documentation page. It has a very good user interface, so you can configure things in the graphical user interface as well as in the YAML configuration. It is very easy to maintain: when there are updates, you get notifications, you can simply click on the update, and in the last two years I have had no problems with that. I also migrated from a Raspberry Pi 3B+ to a newer version.
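As a taste of that YAML configuration style, here is a hedged sketch of two of the sensor ideas mentioned earlier, a shell command and a REST call, as a `configuration.yaml` fragment; the URL and JSON path are illustrative only, not a real API:

```yaml
# Sketch of a configuration.yaml fragment -- URL and JSON path are examples.
sensor:
  - platform: command_line          # any shell command can become a sensor
    name: CPU load
    command: "cut -d' ' -f1 /proc/loadavg"
    scan_interval: 60
  - platform: rest                  # any web API can become a sensor
    name: Bitcoin price
    resource: https://api.example.com/btc/price
    value_template: "{{ value_json.usd }}"
    unit_of_measurement: USD
    scan_interval: 300
```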
Such a migration works via backup and restore: you take a snapshot on the old system, download the file, and import the snapshot on the new system, and everything works again as before. That is very good. But there are also some pain points, or not-so-good things. Sometimes it is a bit hard to find information: I had trouble finding the information I was looking for, but in the end it turned out not to be a problem, just a quirk and a matter of time. Related to this, it is sometimes a bit hard to find the error messages, or it turns out there was no error at all, which is why something did not work. This could be better; in the workshop I will give you an overview of how to help yourself and show you the different levels of logging, and then maybe you will see what I mean. A very important thing: don't choose the SD card too small. In the beginning I used an 8 GB SD card; I thought it would be enough, but you accumulate historical data that is kept for a certain time, and depending on the number of sensors this can grow very fast. So don't choose it too small; I think 64 GB or 128 GB should be way more than enough. Another point: don't use an SD card that is too slow. In the past it was only possible to have everything on an SD card, and if the SD card is too slow, booting the system takes a lot of time and everything is always kind of laggy; if you put a very old hard drive into an up-to-date computer, you will get the same, for sure. So have a look at the performance of the SD card. I can give you some recommendations later, but for now it's just a hint to take care of the speed of your card. Another thing that bothers me is that you sometimes get duplicated items: if you are using Bluetooth tracking or something like that, over time you get duplicated items, or if you have multiple devices logged in with the same account. And it's quite a mess to delete these duplicated items afterwards. Yes. So, a very, very short preview of the workshop that should start really soon, if you want to attend. I will give you a short overview for the beginning: we will install Home Assistant on a Raspberry Pi, or if you want to use VirtualBox, we can also install it in VirtualBox. I will give you an overview of the user interface, we will create some sensors together, we will create some actions, and I will show you some useful hardware, like these NodeMCUs or the D1 Mini, to build your custom sensors for your personal needs. The big problem is that you start at a specific point and want to grow over time, because usually you can't plan everything right away, or maybe you don't have the money to buy everything right now; you can go step by step, and to save some money it is way cheaper to solder things by hand. And for sure I will show you how to extend the possibilities by adding community integrations: you have a lot of integrations already built in, but sometimes you need a bit more, and for that the community integrations are quite nice; I will show them to you in the workshop later. We also have the ESPs. I will show you how to flash these ESPs quite easily, with some examples with movement and temperature sensors. I also have some relays in my home, to open and close the garage door and things like that; we can show this in the workshop later as well.
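As a hedged sketch, such a self-soldered temperature-and-motion node could be described for ESPHome, the tool introduced in the next paragraph, roughly like this; the pins, names and credentials are assumptions for a D1 Mini with a DHT22 and a PIR sensor:

```yaml
# ESPHome sketch for a D1 Mini -- pins, names and credentials are made up.
esphome:
  name: hallway_node
  platform: ESP8266
  board: d1_mini

wifi:
  ssid: "MyWifi"
  password: "correct-horse-battery-staple"

api:      # lets Home Assistant discover and use the node
ota:      # over-the-air updates after the first USB flash

sensor:
  - platform: dht                   # cheap temperature/humidity sensor
    pin: D2
    model: DHT22
    temperature:
      name: "Hallway Temperature"
    humidity:
      name: "Hallway Humidity"
    update_interval: 60s

binary_sensor:
  - platform: gpio                  # PIR motion detector on a GPIO pin
    pin: D5
    device_class: motion
    name: "Hallway Motion"
```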
An interesting tool that I would like to show you is ESPHome, because flashing those ESPs is quite easy with ESPHome, and the configuration of the different components is also very, very easy. So, now we can go to the workshop. If you have questions, you can ask them in the workshop later. You can go to the Jitsi room at jitsi.cbase.org slash rc3 slash rt3fu, with the super secure password 23fu42. And if you would like to start by yourself, you can simply go to the Home Assistant website, home-assistant.io, and check out the Hass.io installation page. Have a look at the type of installation you want to do; as I told you, you have different possibilities. You can install it as a VM, or you can put it on a Raspberry Pi, and even for the Raspberry there are several ways to do this: the easiest is to use the generic HassOS image, where you have a Home Assistant operating system and everything is installed inside Docker containers; or you install it yourself via Docker, or directly. Okay, we can see that a lot of people will attend Teco's workshop. We stop our stream with Teco now; Teco, have fun with the workshop. Good luck. Thank you. See you.
A short overview of home automation basics and history, as well as a hands-on with HomeAssistant and ESPHome on a RaspberryPi. If you ever wanted to get started with home automation, you should start here. I would like to give you an overview of the history of my personal home automation experience, with the good and bad things I had to deal with. In the first part of the workshop I would like to talk a bit about the basics of home automation to ramp everyone up. A rough overview would be:
- Home automation: imagination vs. reality
- Sensor, actor, WTF?
- What hardware has to be dealt with
- Evaluation of home automation hubs
- Which software/standards will cross your way on the component side
- How to bring them all together
In the second part of the workshop I will guide you through the initial setup of HomeAssistant (hass.io) and try to integrate whatever you have at home together with you. I will also show some tweaks that do not come with the default setup. Topics:
- Hass.io installation
- Hass.io configuration
- Intro to the Hass.io dashboard
- Intro to HomeAssistant automations
- NodeRed vs. the built-in automation capability
- NodeMCU/ESP32/ESP8266 the easy way
- Customisations without any limits
- What else do you have in mind for home automation?
What to bring to the workshop: a Raspberry Pi 3 B/B+ or 4B, or a virtualization solution that can handle VMDK, VHDX, VDI, QCOW2 or OVA.
10.5446/51886 (DOI)
So, welcome back to the c-base channel. You know the face, it's riot again, today, or now, with his talk about AVIO, Audio Video Input Output. Stupid, but it's great. The stage is yours. Hello, welcome back, or welcome to this lecture. This lecture is about a little bit of an artsy thing, but it's also a technology demonstrator, because maybe you followed my earlier talk from an hour ago: I already mentioned that this project was built with the Isomer framework, which was initially developed for the Hackerfleet operating system. And I wanted to showcase one thing: Python developers always get stupid comments like "Python is slow", and I set out to demonstrate that this is just untrue; it just depends on how you use it. So I started building a multimedia system, or solution, that is, yeah, well, let's see what it is. I hope you like arts, because this talk introduces you to AVIO, and AVIO was made for arts. Arts in the context of many things. What does AVIO stand for? It's a short acronym, and I know it's a little bit stupid, but it stands for Audio Video Input Output. And actually it doesn't stop there: we are taking much more than just audio and video data and mangling it, we're also taking lots of weird inputs and outputs; you can control stuff with your joystick, as we'll see later. It came up because I had all these formats and I wanted to mix and mingle them and be creative with them. If you look at what the cool kids are doing in the industry, in the music industry or the VJ industry, whatever, they all have cool tools, tools like VVVV or Blender or something. Everybody has some tool in use to make their performance greater, and AVIO aims to be the Swiss Army knife of these tools. It's got some focus, but then again, not really; it's complicated. But you can combine it with any other tool because of the multitude of inputs and outputs. So let's dive into the technical aspect of the software suite. It's actually just a bunch of imports and some very interesting glue to hold everything together. I'm standing on the shoulders of giants here, because the Python ecosystem has learned so many tricks regarding multimedia and various input and output formats; for some things you just have to import this and then use it, and you're happy. Yeah, please clap now. It's not really that much effort, but I think the collection, and how it is glued together, is what makes AVIO special. Behind the scenes, as I already mentioned, it uses Isomer as its core framework, because it brings some facilities that are really useful in building such a tool. There's a web front end which you can use to configure various parameters of your operation; it's got live previews, and you can use it as a renderer. But it's also got the full power of the Isomer back end, as in modularity and components. Let's see, I think I have a slide about that. Yeah, we're getting into the gritty details now. So the general idea of AVIO is that everything is kind of a first class citizen. I'm not really focusing on any aspect specifically; every idea, every part of it should get the same attention and be intermixable in any imaginable way. Some of these ways don't make sense, others make a lot of sense, and the kind of drive you should have when working with this suite is to try things out. It's very experimental, and sometimes you stumble upon very interesting things to do.
The overall system component architecture allows us to combine various aspects of technology in new and sometimes meaningful ways. It's much like Pure Data, where you build graphs of components that communicate with each other to achieve specific goals. Plugins can be developed and built with the Isomer infrastructure in mind, so you have some general purpose tools for communication or network operations, but also some simple components like a Pygame input component where you can use SDL input devices. The components communicate by event-based messaging: you just emit your data, and somebody else might pick it up, or might not. It depends on which components you're running, but you can design concise graphs. This allows asynchronous handling, which is very important because I don't know when some media input or some joystick input is coming; everything happens on the fly, so everything needs to be processed as such. This also allows for very efficient computation: if you do it the right way, think in streams, then you're pretty much set. The detailed user interface, which is not really performance-oriented, runs in a web browser served by the web server, so you can fine-tune things or load configuration data or whatever. This is not meant as an immediate performance interface; I'd like to get the computer out of the way when I'm performing as a musician, this is just for setting up. To actually be able to do something with AVIO, you need a little bit more than just input and output components. There's a multitude, a real bunch of components for different kinds of things to do, and I come up with new ideas every few months: just recently I built a beat counter, which allows you to synchronize better to the music, or a joystick interface for switching presets, for example. Lots of batteries are already included. We have human interface device support for gamepads, joysticks, and the analog sensors in them. I was very astonished to find out that on certain gamepads, although they look like they have digital buttons, the buttons are analog, so you get 26 analog inputs on one of these controllers, for example. There are also cameras and other OpenCV-based sensors available. We have MIDI input and output via Pygame, but I'm working on other solutions as well, so you can communicate with Jack directly. There's also an OSC library that I integrated, so you can get data from OSC controls or send it out. You can obviously import and export various file formats; I'm working a lot with animated image sequences like GIFs, but I have also loaded videos already. There's all sorts of weird stuff that might come in useful depending on what you're doing and what you already have. It's easily extendable: you can write a protocol adapter in less than 10 minutes, and it's good to use. One interesting part is the output. We talked about lots of inputs; how do you output the result? Well, with plain data it's pretty easy: you just send out a MIDI clock, or data, or some other control information. But very often you want to render video data, for example. This can be done, in future, with the Phaser library in the JavaScript front end, so you have a rendering head that runs in your browser and can make use of 3D surfaces or 2D art, and you can play back sounds and music if you want to. You could stream audio from the AVIO server itself.
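To illustrate the emit-and-whoever-listens-picks-it-up pattern described above, here is a tiny generic sketch in Python. To be clear, this is not Isomer's actual API, only the general idea of event-based component wiring:

```python
# A minimal, generic sketch of event-based component wiring -- this is NOT
# Isomer's real API, just an illustration of the pattern.
from collections import defaultdict

class Bus:
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, event, handler):
        self.handlers[event].append(handler)

    def emit(self, event, payload):
        # Fire-and-forget: components that care pick it up, others ignore it.
        for handler in self.handlers[event]:
            handler(payload)

bus = Bus()

# A "joystick" component emits axis events; a "mixer" component consumes them.
bus.subscribe("axis", lambda value: print(f"mixer: crossfade set to {value:.2f}"))
bus.emit("axis", 0.42)
```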
Speaking of the server: AVIO is obviously very network-oriented, so you can have multiple machines running in your system, one dedicated to this task, one dedicated to that task, and they can communicate with each other and exchange meaningful information, like scenery control data or something. But this mostly needs to be built by hand, because there's not much infrastructure yet to automate these kinds of things. I started out mixing video sequences for our Mate Light; I don't know if we can swing the camera over to it, maybe someone can do that quickly. It's a 16 by 40 pixel light display. Some people may know this from Congresses for some years already, and it was one of my prime output examples, because mixing video information for this tiny display is something you can even do in PHP or BASIC or shell script; people are doing it, and there are actually really nice applications working with it. It's a perfect candidate for AVIO test runs. I started mixing video information for it, I think, four or five years ago, and this was actually the foundation of the AVIO framework, because it started that way. But at some point, when I was really fed up with all those naysayers claiming Python is too slow, I decided to just increase the frame buffer a little, take bigger input imagery, and mix that. So I was mixing six to seven full HD streams in Python in real time, and I think that's pretty impressive considering that there was no optimization going on at all; I was just using NumPy to blend these matrices into each other, and it worked. Since then I've gotten many pointers and input on how to build a blazingly fast system: there are approaches that do this on the graphics card, because with textures you can be so much faster, for example by animating textures, and there are other approaches as well. Many interesting ideas came up from various communities, and I hope to be able to add some of those in the next few months, so we get a fully fledged video mixer capable of full HD or even higher resolutions. So what did I do with all that already? Some things were just too good not to try out, and some things stuck; other experiments were not so successful, but let's check out a few. I already mentioned the Mate Light mixer, I'm getting ahead of myself, but it is really a nice tool, and I hope to have a nice front end for controlling the VJ functionality soon. I've been building something with a web front end that is like a mixer deck: you can see several slots and add more of them, and you can also render text labels, which is all preparation work for a larger system that is not limited to just 40 by 16 pixels. Then there was the virtual vibrato with a Sony SixAxis controller. Hello, Sony, it's really nice that you developed a Linux driver for your recent gamepads, by the way; they are not completely evil, I love that. I was playing around with that a lot, because it allows you to get a lot of analog input, and it's conveniently already byte-sized, so you can just take it and translate it into MIDI data. That was what I tried, and then I hooked it up with Bitwig Studio and attached some of those modulators to the pitch of a synthesizer. So with the accelerometer, by shaking the gamepad, I could play a perfect vibrato, and I could tune it: you can shake slowly, you can shake fast, you can shake heftily or with just small movements.
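As a feel for why the NumPy mixing mentioned above is fast enough, here is a minimal sketch; the random frames stand in for real video sources, and the weights are arbitrary:

```python
# Hedged sketch of the kind of NumPy frame mixing described above: blending
# several full-HD frames is just weighted matrix arithmetic.
import numpy as np

def mix(frames, weights):
    """Alpha-blend a list of HxWx3 uint8 frames with the given weights."""
    acc = np.zeros(frames[0].shape, dtype=np.float32)
    for frame, weight in zip(frames, weights):
        acc += frame.astype(np.float32) * weight
    return np.clip(acc, 0, 255).astype(np.uint8)

# Two random stand-ins for 1080p video frames:
a = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
b = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
out = mix([a, b], [0.7, 0.3])   # a 70/30 crossfade of the two streams
```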
This vibrato is very finely tunable, like playing a real vibrato on a real instrument. And you can attach it to any aspect of your synthesis process; you could also convert it into mixing data for VJing, for example, or just hot-glue the controller to your instrument and do some movement control on your FX chain. It offers lots of possibilities, and sadly I don't have too much time to try out new things; I would probably be doing wicked shit with it. So why am I focusing on this Isomer aspect? Because the Isomer framework is about to get some upgrades in the next few months that are really beneficial for AVIO as well. There will be pipes and buffer tools for more protocols, which are not core AVIO functionality but share common functionality with other applications that were built with Isomer; for example, we have added MQTT for the sailors, to get sensor data across networks, and this may be useful in performance situations as well. There's strong support for command line tools, which may sound not so relevant, but I catch myself quite often fine-tuning stuff with command line tools I wrote. The comprehensive, configurable web access allows you to work collaboratively on your performance, because over the client-server infrastructure this essentially gives you multi-user access to what you are doing: people can fine-tune their aspects of the show, or completely control everything if you want them to. You can also limit that with permissions, but we are artists, we are not limiting ourselves. Then there's the aspect of peer-to-peer, mesh-based networking, and this opens whole new areas of performance, for large audio sceneries, for example, and I'm really wondering what the community may come up with. I hope you have a look at this, maybe adopt it, and try out what you can do. So, I sure hope you like Isomer and AVIO by now. That pretty much concludes my talk. Maybe we have some community questions now. Perhaps. I'll never give up hope. If not, you can always find me online: I'm riot@c-base.org, I am riot on freenode, and there are several other channels you can contact me through. So yeah, I hope you enjoyed this talk as well, and there will be another, last talk from me at 8 p.m. this evening. It's a German lecture about the Leerstandsmelder, where I will be presenting a social tool to get a better grip on vacant housing, the illegal vacancies of living space. Thank you and have a good rC3. Bye.
The AVIO project is a Python based approach of bringing together visual media with audio, especially music. This lecture gives a general overview and invites you to join this playful project. AVIO is built with Isomer and consists of various integrated components that comprise an easy to use interface that allows synchronized processing and mixing of various input data streams. Already implemented data sources are opencv cameras, animated gifs, MIDI, OSC, various sensors and mqtt as well as human interface devices like gamepads, space navigators or joysticks. The talk consists of:
* an overview
* some particular technological details
* demos of what's already possible
* an outlook into AVIO's possible future (e.g. as embedded modular device)
10.5446/51913 (DOI)
Hello and welcome to this talk on ChaosZone TV, here at the rC3 2020. This talk will be about FPGAs, Field Programmable Gate Arrays, and it will be given by Pepijn de Vos about his experience documenting the Gowin FPGAs. Have lots of fun and enjoy the talk. Hello everyone, I will be giving a presentation about how to fuzz an FPGA, but the short answer is: it depends. So I will mostly be talking about my particular experience, and hopefully someone will get something interesting out of it and try to apply it to their own interests, whether that is contributing to this particular project, starting their own, or contributing to another one. I am Pepijn de Vos, I am a software developer and IC designer. My mission is to make better open source software for chip designers. Here is my website and a link to the project I will mostly be talking about, which is about reverse engineering the bitstream of the Gowin FPGAs. I have also linked my Patreon and GitHub Sponsors page, which helps me do this. I want to thank all the people who are already supporting me, mainly Symbiotic EDA, where I did my internship on this project, and also these other kind people. So I will first talk about some background, about how an FPGA actually works on the inside, a bit about the open source tools that are there that we will be working with, then about documenting these FPGAs through fuzzing and decoding, and why I say documenting here and not reverse engineering, and then some results. So, an FPGA, well, programmable logic devices in general: you start with some hardware description language, mostly VHDL and Verilog, and then you do synthesis, which produces a netlist of logic blocks such as AND and OR gates and other things, and flip-flops, which are the memory, the state of the program. The way this is implemented on an FPGA is through lookup tables and multiplexers, where the lookup table is basically a piece of memory that stores the truth table of a certain logic operation; then there is the flip-flop for the memory, and multiplexers, which are used to route connections to neighboring tiles, because an FPGA is basically a whole grid of similar tiles, with some exceptions for special purpose blocks for DSP, memory and those kinds of things. But the core is really the logic tiles, which are generally a collection of lookup tables and flip-flops, together called a slice, and then there are these routing multiplexers that connect the various outputs, and they connect mostly by inter-tile wires. So a tile has connections to tiles 1, 2, 4 or 8 steps away in any direction, and you can connect almost anything to anything, though not anything to everything: the designers try to optimize how much you can connect, because if you could connect everything to everything, that would be a lot of multiplexers and take a lot of area, and these multiplexers already take up the majority of the area. You can also generally see that the more advanced FPGAs actually have fewer multiplexers, because they have better software. So let's take a look now at the software, how the process works from Verilog to the bitstream that ends up on the FPGA. Of course the first step is the programming language, and then there are the synthesis tools: for the commercial ones, Quartus, Vivado, ISE, and then the open source ones.
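As an aside on the lookup tables just described: the configuration word of a 4-input LUT is simply its 16-entry truth table, one bit per input combination. Here is a minimal sketch in Python, with a made-up helper name, of how such a word can be computed:

```python
# Sketch: how a 4-input lookup table encodes a logic function. The 16-bit
# configuration word is simply the truth table, one bit per input combination.
def lut4_init(fn):
    """Pack the truth table of fn(i0, i1, i2, i3) into a 16-bit word."""
    init = 0
    for addr in range(16):
        bits = [(addr >> n) & 1 for n in range(4)]  # i0 is the LSB of addr
        if fn(*bits):
            init |= 1 << addr
    return init

print(hex(lut4_init(lambda a, b, c, d: a & b & c & d)))  # 4-input AND -> 0x8000
```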
The popular one is Yosys, which uses ABC for optimizations, and its output is a netlist in various formats that are supported by the tools. Then there is a place and route step, where you map these logic elements to specific locations on the FPGA and connect them. In the open source tools this is sort of separate, where you take the assembly language of the FPGA and generate an actual bitstream from it, but in the commercial tools this is generally one step where you just input your netlist and you get a bitstream out. So these are the things that need to happen if you want to have an open source toolflow for your FPGA. So yeah, the first step is synthesis, which in this case is Yosys, and this is work, but it's just software. You can open the documentation from the vendor and you see all their primitives, and you write synthesis for them, and it works. There's no digging involved into the software really; yeah, I don't want to get too much into it, but it's work, just writing software. But for place and route the vendors generally don't really tell you how exactly the FPGA works inside, so it's a little bit more difficult, and that's why you'll see that Yosys has support for a ton of FPGAs, and also other things like ASICs: the Xilinx parts, Altera, all the other FPGAs. Well, nextpnr is much harder, so it started with Claire Wolf and the IceStorm project for the iCE40 FPGAs, which then got expanded to the ECP5 FPGAs, also from Lattice, and these are the two main architectures currently supported by nextpnr, but there's work going on to expand this to, well, Gowin in my case, and some other projects that are also working on this. But what nextpnr does not do is generate the actual bitstream, so we need this extra step where you take sort of the assembly language from nextpnr and turn it into an actual bitstream that you can program onto your FPGA. And what this talk is mainly about is figuring out the bits you need to do the nextpnr bit, where you place and route all these multiplexers and lookup tables and flip-flops that are inside this FPGA. So to get started, step one: get the license. This is an unfortunate part of commercial FPGA development, that you have to get the commercial software, and how this works can vary, but for Gowin you have to fill in this form, and this can be a slow process. From what I've heard, it's preventing some people from using it; for example at a university, when you have your whole classroom wanting to work on FPGAs and you have to wait a week for your license to arrive, it's not a very nice process. So this is also where open source can be really advantageous. Next step is to get an FPGA, or maybe, it depends, maybe you have one already. The nice thing about Gowin is that Sipeed released a really, really cheap FPGA board, so you can just spend five dollars and you have an FPGA. This is second, because you don't immediately need it; the first part is just getting the software, because only once you get your software working can you actually use the FPGA. And then actually install the software, which is also not trivial in some cases, because, yeah, there's software for Windows and Linux, but in reality Linux in the EDA industry means Red Hat Enterprise Linux, so if you are on Ubuntu or Arch Linux or whatever you have, you are probably out of luck.
Sometimes the way to go, if the software does not run, is this: the shipped software is fairly old, and the C++ and other library files it bundles don't work on modern Linux, so you just delete them from the bundle, then it uses the system libraries, and it generally kind of works. And then the first step is really boring: just read the manual, try to use it as it's supposed to be used, and try to get as much information from it as you can, because everything that's already there you don't need to document yourself, so it's a real time saver; you spend some time here, basically. The goal of this step is basically to find a way to take the lowest level input that you can generate and automate it, and then get as much information from the generated reports as you can. And for Gowin, this means they have a TCL shell, in which you can write scripts to input a netlist, Verilog, whatever, and then synthesize and place and route. The synthesis tool produces a Verilog netlist, so the lowest level you can go in this particular tool is to place and route this netlist and then constrain every single cell you put into it to be in a specific location, so you have a sort of deterministic output. And the positive side of this particular FPGA is that the bitstream output that goes into the programmer is an ASCII format; I mean it's not human readable, it's still binary, it's really weird, but the framing is already done for you, so you don't need to look in a hex editor. The downside of this particular tool is that the other outputs are kind of useless, so there's not much info on timing or routing or anything that you could get information from. We'll get to that later also. Now on to fuzzing. Fuzzing an FPGA basically means you generate a bitstream, you change the tiniest, tiniest, tiniest bit of configuration about this bitstream, generate another one, and then you compare them, and then you know that this minimal change that I made results in this one bit flip or whatever, and then you can sort of start to make these connections about what the meaning of all these bits is, and then you repeat it, and repeat it, and repeat it; that's the basic idea. So step one is you write a netlist. Here I wrote this Verilog netlist that the Gowin IDE uses. It has a module and then this LUT4 primitive, which is a lookup table with four inputs, which is what Gowin uses; it's a LUT4 architecture. And then it has a 16 bit, so two to the fourth, sort of memory where you put the truth table in, which is a parameter that you can tweak. And then you write it to a file. So in the beginning I just had this really stupid sort of bash script system where you just sed on the file to replace this init parameter with an actual value, and then you just loop over all the combinations and see which bit changes or something. And the other thing you need is a constraint file, because you want to place this LUT in one particular location, so you know that this location corresponds to these bits in the bitstream, and then you also update this location to whatever you want it to be, because in the end you want to find all the bits of course. Then you make a TCL script that reads all these files, runs the PNR, you get your bitstream and whatever other output you want, and then you look at the bitstream and see if you can make sense of anything.
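As a rough illustration of that loop, a driver might look like the sketch below. This is not the project's actual script: the constraint-file format shown, the TCL script name, and the exact LUT4 port names are assumptions, though `gw_sh` is Gowin's TCL shell and `.fs` its bitstream extension.

```python
import subprocess
from pathlib import Path

VERILOG_TEMPLATE = """module top(input a, b, c, d, output o);
  LUT4 #(.INIT(16'h{init:04x})) lut(.I0(a), .I1(b), .I2(c), .I3(d), .F(o));
endmodule
"""

def run_pnr(init: int, location: str) -> bytes:
    # Write the netlist with the truth-table parameter we want to test.
    Path("top.v").write_text(VERILOG_TEMPLATE.format(init=init))
    # Constraint file pinning the LUT to one tile (format is hypothetical).
    Path("top.cst").write_text(f"INS_LOC lut {location};\n")
    # Hand both to the vendor's TCL shell, which runs place and route.
    subprocess.run(["gw_sh", "run_pnr.tcl"], check=True)
    return Path("top.fs").read_bytes()

# Diff two runs that differ in a single parameter bit.
base = run_pnr(0x0000, "R2C2")
flip = run_pnr(0x0001, "R2C2")
```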
As I said, for Gowin it's relatively easy because they have this ASCII grid of bits. So there's this header, which is fairly constant; I just ignored it and hoped for the best in the beginning. There are some checksums and things in there, but yeah. And then there's this giant block of bits that just get mapped onto the FPGA. You see to the left there is a bit of padding, and then it starts with the actual bits, and then I found, or actually someone else who looked at it before me found, that at the end it has this CRC check to ensure it is transmitted correctly, and then there's more padding at the end. Yeah, and then there's a footer, which I left off here. It's worth noting that for the bigger FPGAs, I've heard it's more like a command stream, so it's not as easy to change one bit and see the resulting bit flip, because once you start moving stuff around, the command stream also changes, so you need to dig deeper to understand the command stream before you can actually map bits to each other. But that's the basic idea, and for Gowin this was relatively easy. Like I also said in the beginning, it depends on the FPGA; there's no one guide where you follow two easy steps and you end up with a free open source toolchain. It's a lot of discovery and trial and error. I wrote this little Python script that takes these bitstreams and makes a nice image out of them, or, well, it makes an image out of them, where you can see all these little lookup tables, the square blocks with the flip-flops next to them. Yeah, from there this was basically a big NumPy array, and then you can just XOR two bitstreams and you get the difference bits, which is how I figured out the differences. So yeah, congratulations, this was your first fuzzing of a bit. Yay, only a million more to go. And this is kind of a problem: each PNR run takes a couple of seconds up to a minute depending on what you're doing, or even longer if you're not constraining everything, of course. So if you can imagine, this FPGA has a lot of bits, millions maybe, maybe a lot less, this is a relatively small FPGA. So if you take a minute per bit, you will be there a while, even if you have a beefy multi-core computer; it'll be slow. So you need to be a bit smarter about it to make some real progress. I mean, with this you can already do some fun things. You can for example tell the vendor tool to generate a bitstream, and then you can see, okay, this was an AND gate, and now I make it an OR gate by just manually flipping some bits, and then you can test it on your FPGA and see if it works. But if you want to get something really practical, you of course need to understand a lot more bits. And to make this more practical there is the binary trick; it doesn't really have a name or anything. But the idea is: okay, imagine you have these LUT bits, there were 16, but here, like, 8 fit on a slide, that you want to find the location of. Normally you would flip one bit per run, then another bit, blah blah, very slow. That would be 8 runs in this case. But what you can also do is, in each run, sort of assign a binary number to each bit, and then flip them according to this binary number. So you can see A is off in all runs, and then B is like 1 and then C is 2, and so on. And this way you only need log2 of the number of bits many runs, which is of course much more efficient. The problem with this approach is that not all combinations are unique.
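The diffing itself can be as small as this sketch (assuming, as the talk does, that the bitstream has already been parsed into a 2-D 0/1 NumPy array; the parsing helper here is hypothetical and only roughly mimics Gowin's ASCII framing):

```python
import numpy as np

def bitstream_to_array(raw: bytes) -> np.ndarray:
    """Hypothetical parser: one '0'/'1' character per configuration bit,
    one line per bitstream row, header/footer lines skipped."""
    lines = [l for l in raw.decode().splitlines()
             if l and set(l) <= {"0", "1"}]
    return np.array([[int(c) for c in l] for l in lines], dtype=np.uint8)

base = bitstream_to_array(open("base.fs", "rb").read())
flip = bitstream_to_array(open("flip.fs", "rb").read())

# XOR highlights exactly the bits that changed between the two runs.
diff = base ^ flip
for row, col in zip(*np.nonzero(diff)):
    print(f"bit changed at row {row}, column {col}")
```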
For example there are bits that you're not testing, which will always be 0 all the time. So these will not show up if you're testing A here. And there are also some bits that are always on, no matter what you're doing. And in this case A is also 1 all the time, so that's a bit of a problem. The other problem is that some bits have weird relations to the single feature you are tweaking. So it might not always be a 1-to-1 relation between the thing you are tweaking in your code and the thing that shows up in your bitstream. A simple example in this case is B OR C, which now conflicts with D. So you wouldn't be able to tell D apart from B and C. That's a bit of a problem. For example it shows up in IO banks. Every side of the FPGA has an IO bank which you can turn on and off. So if you use any input/output buffer, any pin on any side, it enables this whole bank. If you use this binary trick, this bank will always be on, basically, and you will never figure out what these bits are. So you need to be slightly more advanced, which is the balanced constant-weight code, which is sort of a Hamming-distance-related thing. But the simple explanation is that for each number, the number of 1 and 0 bits is always the same. So in this case there are always two 0s and three 1s in each codeword. And you can imagine that if you take the AND of two of these numbers, or the OR, it will have more or fewer than three 1s in it. So you already don't have a codeword with all 0s or all 1s, but also logical combinations: there are some technical terms for this, but most logical combinations will also have a unique code, so that they don't conflict with the other bits that you're testing. This is always unique, but it takes a few more runs than the binary trick, which is a fair trade-off I think. I'm not exactly sure what the complexity of this one is, but it's still better than straight one-bit-per-run. The only problem is, imagine you are now fuzzing these input/output buffers. You're going through all of them, but you're always enabling one of them, for example. And this would mean you never see the bank going off. So you need what we call meta-fuzzers. They're sort of fuzzers on top of the first collection of other fuzzers, basically. So you add extra runs that deliberately trigger these more complex relations. So you have to sort of... well, you can write a check: I found a combination that I don't understand, for example the AND of two bits. And then you can give an error, and then you're like, okay, what is this? And then you can form a hypothesis and write a meta-fuzzer that says: okay, I expect these bits to be the AND and the OR of each other, so I expect them to have this pattern. And then you have this meta-fuzzer that specifically triggers this pattern, so that you turn on these combinations of bits and more complex relations. And you do this with these constant-weight codes as well, but there's only one here, so you don't see it. But yeah, that's meta-fuzzers. They are kind of tricky. Yeah, so that's an overview of fuzzing. A roadblock that I ran into with Gowin is that there's no control over or insight into the routing. From what I've heard, with iCE40, ECP5, other FPGAs, usually you can either control where you want the wire to go, or at least you can inspect which routes the vendor tools chose. And either way, you can then correlate these things. But it appears, as far as I know, that Gowin doesn't offer this kind of control.
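A minimal sketch of generating such a constant-weight assignment in Python (my illustration, not the project's code): every candidate bit gets a codeword with the same number of ones, so all-zero, all-one, and AND/OR combinations of codewords can never masquerade as a real codeword.

```python
from itertools import combinations

def constant_weight_codes(n_runs: int, weight: int):
    """All n_runs-bit codewords with exactly `weight` ones.

    Codeword k tells you in which runs candidate bit k gets flipped.
    """
    for ones in combinations(range(n_runs), weight):
        yield sum(1 << i for i in ones)

# With 5 runs and weight 3 we can distinguish C(5,3) = 10 candidate bits.
codes = list(constant_weight_codes(5, 3))
assert len(codes) == 10

# AND of two distinct codewords has weight <= 2, OR has weight >= 4,
# so neither can collide with a legitimate weight-3 codeword.
for a, b in combinations(codes, 2):
    assert bin(a & b).count("1") != 3
    assert bin(a | b).count("1") != 3
```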
So this makes it almost impossible to fuzz the routing, because you can't make a minimal change that changes the wire it uses. You just have to rely on the router to pick a different one and sort of push it a bit in the right direction, which is really a pain. So the solution is to also look at the files provided by the vendor. So this vendor has this tool, and of course they also have data files that describe their FPGA: binary files, not documented at all. The favorable thing about the Gowin IDE is that it doesn't have an end-user license agreement like most of the Xilinx and Altera kind of tools, which prohibit you from messing with them. So I'm not legally saying anything about this, but this makes it sort of, okay, maybe, to have a peek at them. So yeah, I complemented this fuzzing with looking at the files the vendor provided. So yeah, basically the goal is to reverse engineer this file structure and write a parser for it, so you can extract the data from it. You can do this in several ways. You can just stare at the hex dump and try to make sense of it. My internship supervisor was actually a superstar at this, who could just stare at a hex dump and immediately see all sorts of things. I was like, how do you do this? But I'm not such a superstar at this, so I went for other approaches. First, you can run the program in GDB, just set breakpoints on things and see if you can extract some information from the memory of the program. And you can also decompile the program and look at the assembly in something like Ghidra; I don't know how you say this. So for example, in GDB, one thing I did was run the TCL shell. You let it start up so you don't get all the startup noise. Then you set a breakpoint on the fopen call. Continue, run the place and route. And then continue, continue, continue a few times. And eventually you find this interesting function call where it reads the route node table. So there's some debugging information left in this library, which is convenient. And fopen pointed at this data file of the Gowin IDE. So I opened Ghidra and looked at this route node table class. And it turns out that it reads this file straight into a struct, no decoding: struct to disk and struct from disk. But it had all these getter and setter methods that are exported as symbols. So I could just take a setter's name and the address it was writing to, and this would directly correspond to data in this data file. It was a bit laborious going through the data file, getting all the setters and making them into a little Python script. But in the end you get a Python script that can extract these things and at least attach some names to them. The other thing I did was write automated GDB scripts. You normally use GDB interactively, typing into it, but you can also just tell it to load scripts. And this script in particular breaks on another function that reads another type of data file. And then it breaks at every read out of this file, logging the offset into the file, how many bytes were read, and the function they were read from. So you can sort of map these functions to the addresses and blah blah blah. And this particular file was more like an archive with tables and different things in it, more of an actual file structure, which was a bit more involved. But then you write a parser for it and you can extract this data.
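In the spirit of that approach, and purely illustrative (field names, offsets, and record layout below are all made up), once the exported setters tell you where each field lives in the on-disk struct, extracting them is a small `struct` exercise:

```python
import struct

# Offsets recovered (hypothetically) from the exported setter symbols:
# each setter writes to a fixed offset inside the on-disk struct.
FIELDS = {
    "node_id":   (0x00, "<i"),  # int32 at offset 0
    "wire_type": (0x04, "<h"),  # int16 at offset 4
    "dest_node": (0x08, "<i"),  # int32 at offset 8
}
RECORD_SIZE = 0x0C

def parse_records(path: str):
    """Yield one dict per fixed-size record in the binary data file."""
    data = open(path, "rb").read()
    for base in range(0, len(data) - RECORD_SIZE + 1, RECORD_SIZE):
        yield {
            name: struct.unpack_from(fmt, data, base + off)[0]
            for name, (off, fmt) in FIELDS.items()
        }
```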
Unfortunately, data is not the same as meaning. So you have this bunch of binary numbers and you don't know what's going on. And they also used some interesting techniques: they encoded data as, like, the decimal digits of a binary number. Interesting. But okay, you can look at this data and you have some names from the exported symbols. The simple ones are the LUT bits. But a bigger challenge was routing, which was the key thing I wanted to do this for. And this, in the end, I managed to figure out, and I could extract the routing information from this FSE file. But it wasn't a completely clean solution. For example, the IO buffers: they tend to be very complex in FPGAs. They do a lot of different voltages, levels, different modes, different everything. And I started making sense of all these random fuses and things. And then it turned out they're also different per FPGA. And I was like, oh my god, okay, this doesn't make any sense. So then I went back to fuzzing. The nice thing is, yeah, once you have the chip data, the fuzzing becomes a lot easier, because you kind of know what you're looking at. From this data file you can sort of extract the tiles. And you know, okay, all these tiles are of the same type, and you know their boundaries, their size. So you don't need to duplicate all the fuzzing work for each and every tile. You can sort of fuzz each tile type separately. So this leads to some simplification and speedup, where you can fuzz per tile, basically per tile type. So yeah, in this case, the LUT bits and the basic routing come from the vendor data directly, no fuzzing involved. But the IOBs and the flip-flops are fuzzed, basically. So what I did in this case was, like, my third fuzzer: first, you know, the bash script, then the binary tricks, and then I had this tile fuzzer after the chip database decoding. And what I did here is, yeah, you take, okay, all the modes that you want your flip-flop to be in, for example. So you say, okay, I have tile type 12. For each flip-flop type, put one in every tile, run the PNR, and then you go back to the tiles and extract all the different modes this particular flip-flop can be in. So you do this per tile type and you can go much faster. You don't have to do all these binary tricks. What you do have to do is logic for combining as many different fuzzers into as few runs as possible, to optimize for speed. This can be quite confusing and complicated. And I think the limiting factor here was the IO buffers, because as you can see in the middle, there are a few that are only of a specific type. So all the types of IOB that you want in this particular tile type, you need to fuzz for this tile type, and there's only one of them. So it's kind of slow, but it's still more efficient than all these binary tricks, because you only have to do it once per tile type. So basically the hard part was the clock fuzzer. I didn't talk too much about this yet, but in an FPGA you have the inter-tile routing, which is these wires to neighboring tiles. But there's also global routing, which is generally used for clock trees and resets and other high-fanout signals that you want to cross your whole FPGA. In Gowin there are eight of them. And well, yeah, their muxes are in the chip DB.
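A caricature of that per-tile-type fuzzing in Python (my illustration; tile sizes, positions, and the run inputs would come from the decoded vendor data):

```python
import numpy as np

def tile_view(bits, row, col, tile_h, tile_w):
    """Cut one tile out of the full bitstream grid."""
    return bits[row * tile_h:(row + 1) * tile_h,
                col * tile_w:(col + 1) * tile_w]

def fuzz_tile_type(base, runs, positions, tile_h, tile_w):
    """`runs` maps a mode name (e.g. a flip-flop type) to the bitstream
    produced with that mode placed in every tile of this type; returns,
    per mode, the set of in-tile bits it flips relative to `base`."""
    bits_per_mode = {}
    for mode, bits in runs.items():
        flipped = set()
        for row, col in positions:
            diff = tile_view(base, row, col, tile_h, tile_w) \
                 ^ tile_view(bits, row, col, tile_h, tile_w)
            flipped.update(zip(*np.nonzero(diff)))
        bits_per_mode[mode] = flipped
    return bits_per_mode
```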
So in this sense you have the basic data, but, okay, for the inter-tile routing, the names are kind of obvious. You know, there's like "north, two tiles, number three", so it's the third wire; the information is sort of encoded in the name. For the clock routing it's not like that, and it's very irregular too. If you look here, these horizontal lines are spines. They go from the center tile, where all the global signals come in from the PLLs and the input pins, and then they sort of spread out across the spine columns. And in each spine column there is one multiplexer that connects it to a tap, which is just a vertically running wire. But there's only one tap per column, so you have to kind of figure out which column is connected to which spine, basically. And you can see here sort of 1-2-3-4, 1-2-3-4, but you don't know that, so you have to write fuzzers that basically take up the whole FPGA, where you just sort of scan flip-flops across the rows to see which spine they get connected to. And also these local horizontal branches: they spread out a few tiles from this tap, but it's kind of irregular how far they spread and which tap they connect to. So again this fuzzer has to sweep flip-flops across several hundreds of branches to see which row connects to which branch, and then to which tap and which spine. And in this picture I drew the four primary ones, and there are also four secondary ones, and it's a big mess. So this was some recent work that I did to improve the global clock routes, and I'm now working on the nextpnr part to incorporate that. And after you've done all this fuzzing and decoding, you can sort of figure out the tile format. And you can see here at the bottom, eight rows, or what? Four maybe? I think four, actually. Well, a few rows: there are the LUTs and the flip-flops. And then everything above it is just multiplexers. It's like, you know, 80% multiplexers. Which is, I mean, even if you knew that there are lots of multiplexers, this was still kind of like, oh wow, once you actually see how much it is. Yeah, this is just a picture I generated from this fuse file, where you just color everything differently, with the memory map overlaid on top. Pretty picture, not that insightful. Yeah, then you can start to generate these placements and routings for the stuff that you decoded, and you have your fully open source FPGA toolchain. This particular one is running a RISC-V core that is calculating primes, and it's running on this Gowin board from Trenz Electronic that's also lying behind me here. And yeah, that's my story. This is Project Apicula, and it's on GitHub. You can check it out, contribute, start your own FPGA reverse engineering project, or join some other one. Well, I hope you enjoyed it, and thank you for listening, and I think there will be a Q&A after this. So this concludes the talk. Thanks very much, Pepijn. Very, very interesting. And of course there are questions. One question that comes from the IRC channel is: does the Gowin software provide a simulator from which useful data like timing could be extracted by just observing the simulation process? Yeah, thank you. They do not provide their whole simulator. They do provide behavioral models in Verilog. So you can take their Verilog models and simulate them, but that's behavioral simulation; it doesn't include timing data. There are some encrypted models floating around somewhere, I think, but they're of course encrypted, so you can't easily extract timing data from them.
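A caricature of that whole-FPGA sweep in Python (illustrative only; `run_pnr_with_dff_at` and `diff_bits` stand in for the real scripts):

```python
def map_rows_to_spines(n_rows, n_cols, base, run_pnr_with_dff_at, diff_bits):
    """Sweep a clocked flip-flop across the grid and record, for each
    position, which global-clock mux bits toggle in the bitstream.

    run_pnr_with_dff_at(row, col) -> bitstream array for one placement.
    diff_bits(a, b) -> set of (row, col) bit positions that differ.
    """
    spine_of = {}
    for r in range(n_rows):
        for c in range(n_cols):
            bits = run_pnr_with_dff_at(r, c)
            # The toggled mux bits identify the tap/branch/spine this
            # position's clock input is routed through.
            spine_of[(r, c)] = frozenset(diff_bits(base, bits))
    return spine_of
```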
So then it's easier to just decode the timing databases they also have. No, all right. So, because I don't see another question, thanks again for your great talk. I see that you have a talk right now as well on the same channel. And thanks again; for the people who still want to ask questions to Pepijn, just go to the rC3 ChaosZone IRC channel and ask your questions right away, and Pepijn will try to answer them as well. Thanks again and see you soon. Bye bye.
Over the last few years yosys and nextpnr have gained traction with a fully open source flow for Lattice iCE40 and ECP5 FPGAs, with several efforts for other FPGAs on the way. In this talk I will share my work on Gowin FPGAs, and explain key concepts needed to contribute to more widely supported FOSS tools.
10.5446/51916 (DOI)
Hi there! This is a talk about creating your own programming language, specifically a language that doesn't suck. And for many of us, the language that does not suck is the first language that we learned. And for me, this was the language that ran on this computer here, the Apple IIe, which was a computer that my parents gave to me in 1983. And I just took that picture today; it still works, and it runs a language called Applesoft BASIC. You can still find documentation on Applesoft BASIC to this day. So here's, for example, an introductory manual. And if you look inside that manual, you can see what a BASIC program looks like. You can see it consists of a bunch of lines. Every line has a line number, and on every line there is a command. So it's none of that methods and classes and functions nonsense that we have today. It's just straight up do this, do that, jump to that line, and it was a lot of fun back in the day. So we're going to use the Racket system to implement that. And Racket is great. It's an entire toolbox that you can use to create your own languages. It comes with many languages. And it's just a lot of fun. And if you want to play with the code for this talk yourself, I put everything on GitHub in a repository called MikeSperber slash RC3. And that also links to a download link for Racket and has all the code for today. So anyway, I have to warn you, this talk is pretty heavy on code. Pretty much the only thing that I'm going to be doing is write code in front of your eyes. And if that's just too much for you today, I'll understand and I won't be mad if you go see another talk. But if you're in for some really serious hacking, then this might just be the talk for you. So Racket, when it starts up, comes up with a blank screen, such as this one, which is quite blank. Each file in Racket starts with a #lang line that tells Racket what language that file is written in. And the system actually comes with many, many languages, one of which also happens to be called racket. Now, racket's an okay language, but it also has some annoyances. So we could print something out by calling the display function. And if we run that program, well, down here you can see the output. That's fine. But sometimes we also want to display several things, for example a little bit of explanatory text along with a computation. So here, well, you can see that the default racket language is a Lisp dialect, which always has parentheses around compound forms, and has the operator always in front. So you don't write one plus one, you write the sum of one and one. Now, we hopefully know what that should print out. But if we run that, we get an error message. And that's because display only takes a single argument to print out. And the second argument is supposed to be a port, which tells display where the output goes, if it goes to a file or the REPL or whatever. So really, if we want several pieces of output, such as this, then we need to put in several calls to display. So there you go. Well, that's super annoying, right? Even Applesoft BASIC had the thing where you could just say PRINT and you could just put in several things. So you could do this thing that we were trying to do with display in the first place, right? But of course, in Racket, print is just not built in in the shape that we would like. So we're just going to define it ourselves.
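For reference, the starting point just described, several display calls in plain `#lang racket`, looks like this (my transcription of the on-screen code); the macro we define next replaces this boilerplate:

```racket
#lang racket

;; display only takes one value (plus an optional output port),
;; so printing text alongside a computation needs several calls:
(display "one plus one is ")
(display (+ 1 1))
(newline)
```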
And to do that, we're going to write a macro, which means that this form with print at the front is going to get translated into code that Racket already knows about. And so we want to translate this into these two calls to display up here, right? And well, one restriction that we need to remember is that a form always translates into a single form. So when there are two forms here, in order to turn these into a single form, we just put begin at the front and parentheses around them. So this is the translation that we want. I'll comment that out, just so that we have that at our disposal. And in order to write a compile-time function, or a macro, we say define-syntax: we're going to define a new piece of syntax. And that piece of syntax always starts with open paren, print. And the compile-time function that implements the macro always gets called with an argument, which is exactly the piece of syntax that the compiler saw. And we're going to call that form. Now, in order to translate it into this begin form, we need to take it apart a little bit, identify the arguments: we need to parse it. And for parsing, we will use a Racket library called syntax/parse. And I'm going to import that as a library; it's not a default feature in the core Racket language. And since we're going to use syntax/parse inside a function that runs at compile time, or at syntax expansion time, as we like to say, we're going to say for-syntax up here, so we can use it inside a define-syntax form. So we write syntax-parse here, we want to take apart this form thing. And we will write a pattern that describes the shape of the form. So we'll start with a very simple version of that pattern. It starts with an open paren, print, then there's an argument, and that argument has to be an expression, which is denoted by putting colon expr there. And now we want to translate that into a different piece of syntax. In order to create syntax, what we do is put in hash backquote; we want to translate that into (display arg) for now. So we'll just do a single-argument version for starters. So I'll put that here, I'll run the code. There's nothing in the REPL now, but now I can at least say hello world, very exciting, and that works. But of course, it will only work with a single argument. We want to use it with several arguments. In order to do that, we can just say: well, we don't just want one arg, we want several. And to do that, we put three dots here. Almost looks too simple to be true. It just says, well, there can be as many arguments as we want. And now, as we said above, we need to translate into a begin form. So we want to say it like this, right? And then we want to have as many calls to display as there are args. And we can do that by putting three dots here too, and Racket will automatically figure out that that means we should have one call to display for each argument. And make sure that all the parentheses are closed. And let's try that out. One plus one equals, plus, like this. And sure enough, that works. So now we've added a new syntactic form to the Racket language. Pretty simple, as it goes. And we'll use that as a small building block for the BASIC machinery that we're going to implement next. So, well, I guess it's a good time to save our work. Just to make sure that we're not losing anything, I'm going to call this file basic.rkt. And there we go. So, okay, well, we got that print thing, which is similar to the PRINT in BASIC, which is nice.
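Putting the steps just described together, the macro looks roughly like this; a reconstruction of the code written on screen, before the newline fix added later in the talk:

```racket
#lang racket
(require (for-syntax syntax/parse))

;; print: display any number of arguments, one display call each.
(define-syntax (print form)
  (syntax-parse form
    [(_ arg:expr ...)
     ;; one (display arg) per argument, wrapped in a single begin
     #`(begin (display arg) ...)]))

(print "one plus one is " (+ 1 1))
```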
But I don't know if you've ever used BASIC; a BASIC program really looks like this, right? It says, you know, 10 PRINT hello world. And you can see that there's a line number in the beginning, and then you have something like, yeah, 20 GOTO 10. It's always the first program, the one that just prints something repeatedly. So, yeah, well, this is obviously not there yet, right? We have print, but there's no line number. There's also still the funny parenthesis syntax. When you create a new language in Racket, the way to do that is, well, you start with a parenthesized syntax, and later, on top of that, we will implement the actual line-based BASIC syntax that we might remember from our Apple II days. So to that end, we'll just start by saying not, you know, 10 print; we'll say open paren, 10, print, you know, hello world. Eventually, we're also going to have to take care of the lowercase print here. And then say 20, you know, goto 10, something like this. So that's what we're going to start with. Again, we have to think about how we will translate this into code that Racket recognizes. And, well, you know, this program, it doesn't do anything yet by itself. It just stores, you know, the print command, for example, it just stores that into the storage of the Applesoft interpreter, and I can then start the program by saying, well, I want to start the program at line 10, or something like that. So it's not enough to just translate this into a call to print; we will also have to deal with the line numbers somehow, so that goto 10, for example, makes sense. So in order to store a piece of code, we can just give it a name in Racket, right? So the idea is that we will translate it like this: we will just define a function, we're going to call that line-10. And we will make that one call print. And we will then define a function called line-20. And well, goto of course doesn't exist in Racket, but we can just go to another function by calling it, right? So I'm just going to call line-10 here. And well, you can already see that eventually this is hardly going to work, as line-10 will never proceed to line-20. So eventually, we will also have to put a call to line-20 here, so that the code moves in the right direction. But we'll start simple: we're just going to start with a single line, and then we don't need to call that next function. How does that work? Well, we again say define-syntax, right? And in this case, we're going to define a piece of syntax that we're just going to call basic. Again, it takes an entire form as its argument, and we'll start with just a single command. So we're going to call syntax-parse here, pass it the form and say, well, basic. And the way BASIC commands work is there's a line number at the beginning, which is an integer. And then there is a command, which in Racket terms is just an expression. So, well, what do we need to do? Again, we need to translate it into another piece of syntax. So, well, we can see up there, right? We want to translate it into define. And then, well, there's an open paren; we're going to have to cook up that line-10 name. We'll defer that problem for a little later. And we will just insert the command here, in the body of the function. So, okay, that leaves the problem of computing the name line-10 from the line. So we want to compute that name by sticking that "line-" prefix in front.
And to that end, well, I want to call a function called format-id. And format-id, well, it takes something that I'll explain in just a moment, and it takes a pattern, which is sort of like printf, except, well, the pattern language is slightly different. So the ~a says we should insert something here, and we want to insert the line number. Now, I'm kind of tempted to put in line-number here. Now, I need you to understand one thing: a piece of syntax that represents a number is not the same thing as the number itself. So, for example, if the line number is 10, you know, down here in the REPL, I can type 10, I get 10 as a result. But if I type this funny syntax thing, then it says, well, there's a piece of syntax, and it contains that number 10 in there. If we want to stick it into the name, we really need the number 10. In order to extract the actual piece of data that is represented by the syntax, I can call a function called syntax-e. So if I call syntax-e of, you know, hash backquote 10, then it gives me that 10. So really, you know, that's what that line number is. So I need to call syntax-e here. Moreover, you can see here that I left three dots here: format-id also takes an argument that is called the syntactic context. Now, the macro system in Racket has great sophistication when it comes to identifiers. If you've seen macro systems in other languages, such as in C, they're very simple about names, and you can introduce name clashes very easily, and Racket avoids that. That's not going to be the main subject of this talk. So suffice it to say that, as the context, we want the line-number definitions to be at the same level as the basic form that appeared. So we'll just stick this here. So just a little piece of magic here that we're just going to have to accept for the moment. So, well, this thing, we want to stick that in here, right here where the three dots are. And to that end, I will just give it a name. I'll call that thing name. And I want to stick it in here. Now you have to understand a very subtle issue here, which is that, so for example, you know, when I have a pattern variable here that appears in the pattern, I can just stick that in the syntax. For example, here I can just put in command, and Racket will figure out that I really meant this command here. Unfortunately, well, I mean, there are reasons, but that does not work for this, which is a regular binding. It's not a pattern binding. In order to tell Racket to, well, you know, stick the result of evaluating that piece of code in here, I will need to put in hash comma name, which says: well, here's an expression, stick the value of evaluating that expression in here. So that's, I think, confusing at first, but you'll get used to it. So remember: when there's something here in the pattern, you can just stick it in here. And if you have regular bindings, or if you have a piece of code that you want to stick in your syntax, you will need to prefix that with hash comma, which is also called unquote. So let's see if that works. We'll run that. See, here I already ran into the first trap that I just told you about: I called syntax-e on the line number, but the same kind of thing goes in reverse. So I can't just use a pattern variable in regular code; instead, what I need to do is prefix that pattern variable with hash backquote. Okay. And then it knows, okay, I need to look among the pattern variables for this.
So if I run this, well, then it says format-id, it doesn't know about that, and that's because format-id in Racket is in a separate library. So I'm going to need to jump upwards a little bit and say here that, for-syntax, I don't just want syntax/parse, I also want a library called racket/syntax, and that contains format-id. Go back to that code, press Run, and now at least it's silent. And now, well, I can try doing that by saying basic, you know, 10, print hello world. And it doesn't do anything, right, as it should; it just stashes that piece of code. But if I now type this, it executes that first line of code. So hooray, we've got a very basic, basic version of BASIC here. That works. Okay, so now we have a single command in a basic program. But of course, we want to have several of them. And so up here, we already know how to do this, or at least how to tell syntax-parse about this: we just put three dots here, and it says there can now be several lines in a basic form. But of course, we need to translate that differently. Right now, our expansion just has a single definition for a single line. And now we need to expand that into several definitions. So to that end, Racket being a functional programming language, we do a map. So we apply a function to all the line numbers and commands, and the function always returns a piece of syntax for a single definition. Let's try that. So I'm going to say lambda here. I'm going to put three dots here, because there's a subtlety that I'll explain in just a moment. And now the question is: what do we map over? Well, we want to map over all the line numbers and all the commands, of course. So if we have line-number here, well, you know, it's not just one line number, right? It's several line numbers, one for each line in the program. So we always have to use it like this, right: we always put the three dots behind it. Otherwise, Racket will complain. And moreover, line-number again is a pattern variable that comes from this line in the syntax-parse form. So, you know, the pattern variables, they really only work in this hash backquote form. So that's what we need to write. And we do the same thing with the commands. And we're still not done because, well, you remember, hash backquote returns a syntax object. But what we really need is a list, because we're feeding that into map. So Racket has a handy function that does that, called syntax->list. Also, here is syntax->list for the commands. And that allows us to map over, well, the line numbers and the commands. Now, you know, now here's the subtlety. Because we now have line-number and command be regular lambda parameters, they are no longer pattern variables, okay? They're just regular variables. And this, I think, really is the most subtle aspect of the Racket macro system, sort of: you have to understand this issue, right? The pattern variables come from up here, and everything that is not here in the pattern part of a syntax-parse form is not a pattern variable. So in this case, these are now regular variables. And that means they should not appear in hash backquote anymore, in this part where the line number is. It also means that Racket is not just going to replace the pattern variable by the expansion down here with commands. So I've got to put hash comma here.
So if you figure out that distinction, I think you'll be fine. So let's try that out. I'll run the program. Thankfully, there are no error messages. Let's try again that basic program. Oh, it complains again; there's another trap. Well, not a trap, really; it's something that we've seen before: map returns a list. And you can see it here. It says quote, open paren. That means it returned a list. And that means, well, map produced a list, but Racket really expects a syntax object. You know, if you're coming from Lisp or from Clojure, you expect lists to be usable as syntax objects, but Racket makes a strict distinction here. And it uses that distinction to track source code locations to give you good error messages, and also to track hygiene, to keep track of lexical binding, which is not the main focus of this tutorial, but just so you know. So anyway, we definitely need to stick hash backquote here. And now we have several definitions. And because we have several definitions that we want to stick in one single form, we use begin, just like we did with print before. I'm just going to indent everything nicely. And of course, now we want to stick the result of evaluating this in here. So I've got to put hash comma here. Okay. And unfortunately, there's still another subtlety. If you look at the expansion here, you can see that it would, well, hopefully you can see that it would expand into something like this, right? You have begin, you know, from here. And then you have the open paren, which is from the list produced by map. And then inside those parens are the definitions of the various lines, like, you know, line-10, line-20, and so on. And you can see that there's one pair of parentheses too many. So we really don't want the Racket macro expander to stick that list in there. We want it to stick the elements of the list in one by one, inside the begin. And to do that, we need to put a magic character here: hash comma at, also pronounced unquote-splicing. And that does exactly what we want. So it's just like hash comma, but it expects a list and it will stick the elements of the list there, effectively removing one pair of parentheses. So let's try that again. Oops. So here's that basic form. At least it doesn't error out anymore. Let's see if we can start it. And it says: Hello world. And we can also now extend it, right? And put another line here. So right now we only know print. So do that. And so we can start at line 10. And you can see that, well, line 10 runs, but really we expect the program to go on to line 20 after line 10. Line 20 is there, but we haven't yet linked the lines together. So that's going to be our next problem. So what we need to do here is: well, after the command associated with the line happens, we need to stick in a call to the next line, right? Now, how do we get that call? In order to produce the call for the next line, we need the line number of the next line. But we don't have it here. We just have the current line number. And to do that, we're just going to map over an additional list, which is going to be like the original list of line numbers, but shifted by one. So along with the current line number, there's always the next line number. And to that end, I'm going to call the list function cdr, which just, you know, removes the first element from the list. And so that gives us that. And now, of course, map would complain, because the lists no longer have the same length, right?
This list is one shorter. And so we need to append an element to make up for the last element. And since there is no line after the last line, I'm just going to pass false; this is the Racket syntax for false, #f. And we'll need to make sure that after the last line, we don't try to stick in a call. Okay. So here we are. We need to, of course, make sure that now that we're mapping over three lists and not just one, I need to add an additional parameter here, right? And now we could say call-next-line. And I could just, you know, stick in a call to format-id like this, right? And then pass next-line-number. But that would be poor abstraction. And well, it's kind of copy-paste code. So we should pull it out into its own function definition. That's a useful exercise. So I'm going to take that code, copy it one last time, stick it up here. Now there are two subtleties, let me get rid of this obsolete comment here, that we have to pay attention to. So I'm going to call that function, you know, make-line-name. And of course, we need to pass in the line number, this thing here. But we need to pass in one more thing, because this basic here that you see here, remember that it referred to this basic down here, right? It used to be here. But that's no longer in scope, because we're outside of its parentheses. So we also need to pass it in as an argument. And we'll call that the context; it's used by Racket to make sure that the definitions appear in the right place and are referable by their name. Okay, so we have this. Great. And then we can take out those two calls here and call make-line-name instead. Remember that we now need to pass in this basic thing that was there before, and the line number. And the same thing over here. Sorry, it's called make-line-name, as you probably noticed. And we'll do the same thing here: call make-line-name, remove all the crud here, let's duplicate it. And now we have a nice call. Of course, well, we'll get to the fact that it's not a call yet in a moment. But also, I want you to think about one more aspect of macro expansion. This definition up here of make-line-name is just a regular procedure definition. But we want to call it from the macro expansion process, so essentially at compile time, or macro expansion time. But this is just a regular runtime function. It will not be available until runtime happens. And in order to tell Racket: no, this function, please make it available to macro expansion, we need to replace the define by define-for-syntax. So we define it so it's available during syntax expansion. Okay, so one more thing. This is of course not a call yet. In order to have a call, we need to have parentheses around the name of the next line. So I'm going to put parentheses around it and then stick this in with hash comma, right. And that would be the call to the next line. Well, except it isn't quite, because remember, next-line-number can be false, right? And then we would not have a function to call there. So we really need to make a case distinction here. I'm going to say: well, if there is a next line number, then this is the function to call, the function of the next line. And if there isn't, well, I'm just going to call void, which is a function that does absolutely nothing and is built into Racket. Okay, let's try it out. Let's give it a whirl. Well, oh, it says next-line-number, reference to an unbound identifier. That's because I put a typo over here.
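At this point the code on screen looks roughly like this (my reconstruction; details such as the exact name format are best-effort):

```racket
#lang racket
(require (for-syntax syntax/parse racket/syntax))

;; Compute the identifier line-<n> in the lexical context of the
;; basic form, so all line functions can refer to each other.
(define-for-syntax (make-line-name context line-number)
  (format-id context "line-~a" (syntax-e line-number)))

(define-syntax (basic form)
  (syntax-parse form
    [(basic (line-number:integer command:expr) ...)
     #`(begin
         #,@(map (lambda (line-number command next-line-number)
                   #`(define (#,(make-line-name #'basic line-number))
                       #,command
                       ;; after the command, fall through to the next
                       ;; line, or do nothing after the last one
                       #,(if next-line-number
                             #`(#,(make-line-name #'basic next-line-number))
                             #`(void))))
                 (syntax->list #'(line-number ...))
                 (syntax->list #'(command ...))
                 (append (cdr (syntax->list #'(line-number ...)))
                         (list #f))))]))
```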
Let's try that again. Okay, now it at least goes through. Let's see, we had that program that had two lines in it; it doesn't error out. That's great. Call line 10. And, ah, okay, now it runs both lines. That's wonderful. But we see that we made a small mistake in the implementation of print: at least in Applesoft BASIC, when you call PRINT, it prints a newline at the end. So why don't we, as a last act of this particular task, go up here and stick in a call to newline. That should work. Try that again. And now we get the output as we wanted it from the beginning. So now we can have a basic program that has several lines, where the lines are run consecutively. That's great. So what's next? Well, here we can already see what might be next, which is the infamous GOTO command in BASIC, which just jumps to another place with no memory of where it came from or what was going on beforehand. So if we look at, you know, our code here, we can see that goto is going to require special treatment. We can't just implement it the way that we implemented print, because of course, when there's a goto, there should not be a call to the next line after the goto; that would make no sense, because goto just transfers control directly. Now, well, things are getting a little bit unwieldy at this point in this one big function here. So I will put the code that translates a single command into a separate function, and we'll see how that goes. So here, again, remember that we need to say define-for-syntax, so it's not a macro, it's a function that is called from a macro definition. I'm going to call that function translate-command. And well, so we pass in the actual command. So that's, if you will, the original source code from the BASIC program. And we need to translate that into Racket code. And well, the idea is that we use translate-command to produce the body of this function here. So, of course, that means we need to also pass in call-next-line. So let's see, let's pass that in. Great. And for now, we're just going to keep that very simple. We're just going to keep the definition from up here. Now, again, remember that when we produce syntax, we can only produce a single form at a time. So there are two forms here, but we know we can always combine two forms into one by sticking a begin in front. So we'll just do the abstraction for now, without adding additional functionality. So instead, we will say: okay, translate-command, command, and call-next-line, and do away with this, make sure that all the parentheses line up. Yeah, it's getting to be a lot of parentheses. And see if that works. So run that and, you know, the program runs as before. That's great. So we want to give goto special treatment. And in order to do that, well, we need to inspect the command here and see: well, if it's a goto command, then we give it this treatment. In all other cases, we just translate to the code that we already have. So I call syntax-parse, pass in the command. And now I put here a pattern where it says: well, if it's a goto command, it looks like this: goto, there's a line number, and that line number is an integer, then I'm going to do one thing. And in all other cases, I want to do something else. And you can see here that, you know, this is now a syntax-parse that has several clauses, and the compiler just tries them in order.
While it does syntax expansion, it first tries the first one, then the second one, and the first one that matches gets to run. And this thing here is just a wildcard pattern that matches anything that you want. So that's the fall-through clause at the end. Okay, so in this case, well, we have goto now. Here's another subtlety: we really want the word goto to appear there. But remember that syntax-parse does pattern matching. So whenever there's a name here, that name is just going to match whatever is there in the input. It's really a subtle issue. And we really want the word goto to appear here. And another subtle issue is that we want that to be consistent with the rest of the binding structure of the program. We'll ignore that here. But in order to do that, we need to put something like this, a magic clause here at the beginning, that says: well, goto, whenever it appears here, means that we really want the word goto to show up there. So for all of our specially treated commands, we need to put that in: we need to put the name of that command into the literals clause up here. Okay, so now, when there's a goto, we just want to generate a call to the procedure that's associated with that line number. And we get the name of that procedure by saying make-line-name. Well, and remember, make-line-name now needs that context argument up here, and we don't have it here. So we need to add another parameter, call that context, and up here, we pass basic. And okay, got that here. And then we can call make-line-name with context and the line number. Okay, so got that here. And that's just the name of the procedure; we still need to make a piece of syntax with a call to it. So we put hash backquote, put a pair of parentheses, and in the middle, we need to put hash comma to actually stick that name there. Okay, well, let's see if it does anything. Oh, error message. Well, again, I fell into the same trap that I always fall into: line-number here is a pattern variable. So it really is only valid inside a hash backquote. Well, it is inside a hash backquote, but the hash comma kind of undoes that, so we refer to it as a regular variable here. In order to refer to a pattern variable, again, we need to put hash backquote here. So it must seem confusing at this point. But if you just do a bunch of stuff with macros, you will soon get the hang of it, I think, and then you'll recognize the error message right here. It says: pattern variable cannot be used outside of a template. So line-number is a pattern variable. Now we can see, oh, nicely, that occurrence of line-number here refers to that one. Let's see if it runs now. Okay, let's see if we can do this. So, well, here's a rule that I forgot about, but fortunately, it's easy to remember because there's an error message that says: literal is unbound in phase 0. Right, phase 0 is runtime; you don't have to worry about that too much, but it just means that goto has no definition that it refers to. And we're not interested in a definition, because we're translating occurrences of goto into something else anyway. So I'm just going to put in a dummy definition here, to false. And that just means if there's any reference to that goto, then, you know, syntax-parse will recognize it. Let's, oops, pushed the wrong button there. Let me try again. And, well, at least it doesn't error out. Let's see if we can run the program. And that's pretty neat.
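Here is roughly what translate-command looks like at this point, again a reconstruction building on the make-line-name helper sketched earlier; the dummy definition makes the goto literal resolvable:

```racket
;; Dummy binding so that goto can serve as a syntax-parse literal;
;; calls to it are translated away and never evaluated.
(define goto #f)

(define-for-syntax (translate-command context command call-next-line)
  (syntax-parse command
    #:literals (goto)
    ;; (goto n) becomes a direct call to line-n: no fall-through
    [(goto line-number:integer)
     #`(#,(make-line-name context #'line-number))]
    ;; anything else: run the command, then continue with the next line
    [_ #`(begin #,command #,call-next-line)]))
```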
And now, well, I guess it's time that I put an actual BASIC program here into the source code, so I don't have to keep retyping it, like this, you know, like this, for example. And how are we going to see that it works? I'm just going to put a line 20 there that says goto 40, make that line 30, and put a line 40 which says, you know, "the end", right, so that we know the program is done. Let's see if that works. We call line 10. And indeed, it says hello world, the end, and it skips hello again. So goto works now. All right, what's next? Well, we can just keep adding commands, right? But of course, some cause more trouble than others. And one that causes a little bit of trouble is assignment, right? We can have variables in BASIC. In BASIC it would look like this, right: A equals 42. And, you know, that would establish a variable called a, and we could then use it, right? We could say something like this. And I'm going to use a slightly different syntax. So of course, it has to be parenthesized, the operator has to be in front. So it might look like this. And I'm going to use colon-equals, to avoid confusion with the equals sign without a colon, which is the equality operator. Okay. So we'll just use the same technique that we used before. So just like goto, colon-equals needs to be a literal: we want another clause to match only when there's an actual colon-equals sign there. Well, here's a variable, and that has to be an identifier; that's how we tell syntax-parse that it needs to be an identifier. And then there's a right-hand side, which can be any expression that we want. Okay. So got this here. And now we just need to generate a piece of code that works with it, right? And in Racket, variable assignment happens with an operator that's called set-bang, set!; the exclamation mark is pronounced bang, to denote that there's an evil side effect happening here. And we can just use these two pattern variables: we can just use the right-hand side here. And, okay, well, remember that trick that we needed with goto? We need it with colon-equals too: we need to provide a dummy definition. And now, well, here's a little problem, right? It says a is an unbound identifier down here. And that's because, well, Racket distinguishes between binding an identifier, so establishing that the identifier exists, and changing the value in the storage cell associated with that identifier. Those are two separate things, right? And so we could make that error message go away by saying, you know, define a to some dummy value. But then of course, that's no longer BASIC, right? We shouldn't need that additional definition; in BASIC, we can just introduce an identifier by assigning to it. So somehow we need to compute the set of variables that appear in assignments, and make sure that we also generate definitions for those identifiers, dummy definitions, at the beginning. So, well, we need to generate those definitions. And for generating those definitions, we need the set of variables that are in a program. We'll start by just collecting the variables that are in a single command. And to that end, I'm just going to define a new function here; call that collect-variables. And I'm going to pass in the command. Okay. And well, not just the command: we will want to collect a list, or rather a set, of those variables, because of course a variable might occur in several assignments. So we will use something called an id-set.
And we'll define that in a minute. Racket comes with a library that allows you to manage sets of these identifiers, and we will need to import that library up here: it's called syntax/id-set, and it allows us to deal with those sets. Okay. And of course we need to look inside those commands, so we're going to use essentially the same syntax-parse that we had up here. So we have the various literals, with goto; of course, there can't be any variables introduced by a goto, so we're mainly interested in the clause that has to do with assignments. And how does that work? In that case, what we need to do is say free-id-set-add! with the id-set and add the variable here. And for anything else, well, we're not going to do anything; I'm just going to call void, the built-in function that doesn't do anything. Okay. And now we need to call that function collect-variables for each command, and we need to do that up here, from the basic form, where we have all the commands at our disposal. First of all, we need to create an empty id-set that we can add to; call it id-set. So we're going to call a function with a somewhat cumbersome name: mutable-free-id-set. Not just mutable-id-set but mutable-free-id-set, because again Racket makes a subtle distinction between identifiers depending on whether they're free or bound; don't worry about that, this is going to work. Then, well, we want to call collect-variables on each command, passing in that id-set. But remember, we don't just have a single command; we have a whole list of them. So we take command dot-dot-dot, and of course we need to wrap hash-backquote around it, and convert that into a list; we've done that before. Then we can use a built-in function called for-each, which just calls a function on every element of that list. It's sort of like map, but map returns a list of the results, whereas for-each just throws the results away. So now we have a set of identifiers; we just need to generate definitions from it, and these definitions need to go here, before the actual code for the BASIC program. Again, we can use map to go over all the variables; we'll figure out in a moment what we're going to do with each of them. We'd like to map over the id-set, but unfortunately an id-set is not a list, and map wants a list, so we're going to use a function called free-id-set->list to make it one. And for each variable, we want to generate a dummy definition: again, we generate a piece of syntax that says define, we stick that variable in, and we'll just define it to false, to make sure that things don't get mixed up with regular BASIC values. Okay, now remember, you can see here that this is a call to map, and we want to make sure that this gets spliced into that form here. So again I use that same funny magic thing, hash-comma-at, which splices all those definitions into that begin form. Okay, well, let's see if that works. And now it says, oh, that variable is already defined. That's a good sign, because you can't have several definitions for a single variable, and we had put one in ourselves; I'm going to delete that. Okay. And now we can see that there's a variable here, bound to false as it should be. Let's try running the program. Well, not much happens.
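Here's a sketch of the variable collection as described. The clause shapes and the surrounding glue are assumptions, but the id-set operations (mutable-free-id-set, free-id-set-add!, free-id-set->list) are the real names from the syntax/id-set library.

```racket
#lang racket
(require (for-syntax racket/base syntax/parse syntax/id-set))

(define goto #f) ; dummy literal bindings, as in the talk
(define := #f)

(begin-for-syntax
  ;; record every variable that appears on the left of an assignment
  (define (collect-variables command id-set)
    (syntax-parse command
      #:literals (goto :=)
      [(goto _) (void)]                 ; a goto introduces no variables
      [(:= var:id rhs:expr)
       (free-id-set-add! id-set #'var)]
      [_ (void)]))

  ;; turn the collected set into a list of dummy definitions
  (define (variable-definitions id-set)
    (map (lambda (var) #`(define #,var #f))
         (free-id-set->list id-set))))

;; Inside the basic macro, roughly (sketch):
;;   (define vars (mutable-free-id-set))
;;   (for-each (lambda (c) (collect-variables c vars))
;;             (syntax->list #`(command ...)))
;;   ... then splice (variable-definitions vars) in with hash-comma-at
```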
Why is that? Why does the program not produce anything? Well, we can look: A is now 42. Let's go up here, and we can see the translation of :=, and you can see I forgot something. This just sets the variable, but it does not go on with the program as the rest does. We need to put a begin here and then put in the call to the next line. Let's see if that works. And now, line five. And that actually prints out the value of that variable. So now we've got variables in our program; that's wonderful. Next up are conditionals in BASIC. So how does that work? Well, BASIC has an if statement, and it might look like this: we compare A with 42. It's kind of silly, we know what the result is, but just to demonstrate the feature. And then, depending on whether that's true or false, we either set B to one or set B to two, and we'll just print that out along with A here, just so we know what came out. All right, well, we need to translate the if command, of course. So we need to go up here into our translate-command function, and we add if to the list of literals, so we can recognize ifs. And it looks like this: we have a test, which is an expression; a then branch, which is an expression; and an else branch, also an expression. And, well, we're just going to translate it into Racket's if, but we need to make sure that the then branch and the else branch also get translated. So we use hash-backquote. Again, note that this is not the same if as here; this is now Racket's if. We're just going to leave the test as it is, and then we call translate-command. And remember, we want the result of translate-command stuck in there, so we need to put hash-comma in front, and we pass in some context. Again, we need a pattern variable here, so we put hash-backquote in front, and we pass call-next-line as before. And we do the same thing here: translate-command, context, and call-next-line. Okay, well, let's try running it. That doesn't work: we get an error message saying B is unbound. Why is that? Well, we're generating definitions for all the variables that occur, but so far we're only looking at assignments that occur at the top level; we're not looking inside an if. So we need to make sure that we also extend collect-variables in the same way that we extended translate-command. We need to add if to the list of literals here, and we can just copy the pattern from up here. Copy that here. And then we just need to call collect-variables recursively on the two branches: so we call collect-variables on the then branch with the id-set, and we call collect-variables on the else branch with the id-set. Well, let's see if it gets any better. Well, at least it doesn't error out. Let's run our program again; we call it starting at line number five. And that's not bad. It now prints the number one, which comes from this assignment here: B is assigned one if A is 42, and A is indeed 42. We could also set it to 41 and try that again, line five. And we see that it now prints a two, because the branch went the other way. I'll just change that back and save. Great. So we'll add one more feature, which is actually not just one command; we'll need a couple of commands. And that is BASIC subroutines. So what could that look like? For example, in line 1000, we could say we're just going to print the B variable.
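Concretely, the translated if amounts to ordinary Racket along these lines. This is a hand-written sketch of the expansion for illustration; the line numbers and variable names are invented.

```racket
#lang racket
;; What the BASIC fragment
;;   20 IF (= A 42) THEN (:= B 1) ELSE (:= B 2)
;;   30 PRINT B
;; roughly expands into (sketch):
(define a 42)   ; generated dummy definitions get real values at runtime
(define b #f)
(define (line-30) (displayln b))
(define (line-20)
  (if (= a 42)
      (begin (set! b 1) (line-30))    ; translated then-branch
      (begin (set! b 2) (line-30))))  ; translated else-branch
(line-20) ; prints 1
```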
And then in line 1010, we just put a return statement. So Applesoft BASIC doesn't have functions in the way that we know them, or methods; it just has these subroutines. And so we could output B by just saying, at, say, line 8 here, gosub 1000, like this. And the idea is then that it will go and print B, return, and go on after that with "hello world" and whatnot. We could also call it several times, of course. Okay. And yeah, we'll just leave it like that: two commands, gosub and return. What should we do? Well, we go up here, and obviously we need to do something with translate-command: we add gosub and return. These are not built into Racket, so we need dummy definitions for them, gosub is false and return is false. And so how does that work? Well, gosub is kind of like goto, except the program goes on afterwards. So there's gosub line-number, which is an integer, and of course we need to generate a call, just as we do with goto; I'm just going to duplicate that from up here. The only difference is that after that call is done, when it finishes, we need to continue with the next line. Fortunately, we know how to do that, because we've got the call to the next line sitting up here, so I just stick call-next-line after it. That's the only difference between goto and gosub. And of course, we also need to implement return. And well, what happens with return? With return, nothing happens. Return differs from goto and gosub in that it doesn't go to the next line; it just returns. And when there's an active gosub, control just goes back to the line after it, to that call to call-next-line here. Let's see if that works. Okay, so at least we don't get an error message. Line five. And well, how do we interpret that output? I kind of forgot what the program was. Well, here it said gosub 1000; it printed B, and then it went on with the program as before. Arguably, the program should finish here, after line 40 and before line 1000, but I'll leave that as an exercise for you to implement. So now we have enough BASIC there to at least give us a fair idea of how to do this whole thing, and maybe how to add the rest. And I promised that we would also get actual BASIC syntax, and we only have a few minutes left, so let's get to it. In order to do that, we need to make sure that the functionality that we've implemented in this file is available in other files. So we need to add what's called a provide form; that's like an export or public annotation or something like that. We just need to provide the basic form that we implemented, the print command that we implemented, goto, return, all those things that we added to implement our BASIC syntax. There we go; save that file, and then we're done with that part. Now, for the actual BASIC syntax, there's no way around writing a proper parser, and we don't have nearly enough time to do that. It's not that hard, but it takes a little bit of time, so I've prepared the code to do that. It's just called a reader. So here's a file called basic-reader.rkt, and if I run that, well, you can see here that there's a function called parse-line which takes two arguments. One of them is called src; that's just context information about what file this code resides in and so on. I'm just going to pass false here; later, that value will be provided by the Racket system.
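The shape of the translation, written out by hand as a runnable sketch (line numbers invented): a gosub compiles to a call to the target line's procedure followed by the next line, and return compiles to nothing, so control simply falls back to the call site.

```racket
#lang racket
(define b 7)

;; 1000 PRINT B / 1010 RETURN: return generates no code,
;; so line-1000 just finishes and control returns to its caller
(define (line-1000) (displayln b))

(define (line-30) (displayln "carrying on"))

;; 20 GOSUB 1000: call the line's procedure, then continue
(define (line-20)
  (line-1000)
  (line-30))

(line-20) ; prints 7, then "carrying on"
```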
And the other argument is an input port, which is just a reference to an open file or something like that. But we can also use a string, by just saying open-input-string here. So if I say "10 print a + 1" and call that function, you can see that it returns list structure that corresponds exactly to the parenthesized syntax that we implemented earlier. We just need to hook this up to the rest of the Racket system. And for that, we need to write a little boilerplate function; I'm going to call it basic-read-syntax, and it also takes src and in arguments. That function calls parse-program, which does the same thing as parse-line but for an entire program. And we want to embed the result of that in something that Racket will later understand, so we need to create a module declaration. I'm just going to call the module basic; the name doesn't particularly matter. It imports the racket language, and it also requires, so it imports, the code that we've just written, which is in the .rkt file from before. And now, here's the result of parsing the program; we still need to put a basic form around it and splice in the result of parsing the program. So that's that. Now, you might have noticed that the backquote up here and the comma-at here don't have a hash with them: this doesn't create syntax structure, it just creates ordinary Racket list structure. We still need to convert it to a syntax object, and for that we use a function called datum->syntax, and we just pass false here for the context. That's enough. Make sure that all the parentheses line up, and then we're good to go. Well, almost good to go: we still need to export this function, and we need to give it a slightly different name. We're just going to call it read-syntax on export, and that's a magic name that Racket will later recognize. So I'll save that file, and then we are good to go. In order to try this out, I've prepared another little example file called basic-demo-syntax.rkt. This is what it looks like. The #lang line up here declares that we want to use the reader that we just implemented, basic-reader.rkt, and what follows is a very Applesoft-looking program. And I can run that by just typing line 10, like we used to, and we can see that it pretty much works. So there you have it. Applesoft BASIC is just about as different from usual Racket as it can be, and so if you can do that, then you can implement just about any language you like with the Racket system. And it's not just for toy stuff. It's a great tool for organizing your software architecture, for generating pleasant notation in general, for doing things like documentation; in the Racket system, there are lots of publications and lots of examples of that. And of course, we've used the same machinery to implement teaching languages and to iterate quickly on improving those languages. So I hope this has motivated you to try out the Racket system. Again, please check out the GitHub repository if you want to look at the code, or send me email or contact me some other way if you're interested in this stuff. I'm looking forward to your questions and a little bit of discussion. We'll see. Thank you. Thank you very much, Mike, for the talk, for introducing us to Racket, and for showing us how to implement a language there.
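For reference before the Q&A, the reader boilerplate just described looks roughly like this. It's a sketch: parse-program is the prepared parser defined earlier in basic-reader.rkt, and the required filename "basic.rkt" is an assumption about what the macro file is called.

```racket
#lang racket
;; (lives in basic-reader.rkt, alongside parse-line / parse-program)

(provide (rename-out [basic-read-syntax read-syntax]))

(define (basic-read-syntax src in)
  ;; plain backquote/unquote-splicing build ordinary list structure;
  ;; datum->syntax then converts it into a syntax object
  (datum->syntax
   #f
   `(module basic racket
      (require "basic.rkt")            ; assumed name of the macro file
      (basic ,@(parse-program src in)))))
```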
I already saw there was quite some discussion in the chat, but we still have some questions for you. We start with a question from someone asking why you use these macros instead of simple functions, because he thinks you could also just use a function to produce print. Yeah, so that question was asked just after I introduced print, so it doesn't refer to all the rest that we did there. And indeed, print could be a simple function. There are a lot of things that can just be functions and don't need to be macros in Racket, and print is one of them. But then I couldn't have used it to demo a single simple macro, right? All the other macros that I introduced are a lot more complicated, so I figured I'd do a simple one first. Very good, let's go on. The next question is about the difference between pattern variables and regular variables: it would be great if you could show that in more detail. Yeah, so it is a little bit confusing that there are two kinds of variables, and I hope that throughout the talk there were enough of them that you can see what the difference is. The reason for that is that this is sort of an expert-level macro system that we're using there. The simple macro system in Racket, the construct for which is called syntax-rules, knows only pattern variables. So if you deal only with syntax-rules, then it's very simple. But if you want to write more complicated macros like the ones that we did, then you naturally incur both the pattern variables and the regular variables. But there are only these two kinds, and you get the hang of it by playing around with it. Then there's a comment from a viewer who says that all the switching between hash-backquote and hash-comma reminds her of exiting and entering math mode in LaTeX. Yeah, that's a great analogy, I think. In LaTeX, when you put a dollar sign, you enter this different syntax, this different world, and then you can exit that again by writing \textrm or something like that. That's exactly the same idea that you have in Racket: you enter the syntax-construction mode, you exit it to regular code, and then you can re-enter it. So you can push, push, pop, pop, like the infinite turtles, I guess. The next question is whether you have control over the runtime of your language within Racket. Yes. I see the question goes on a little bit and suggests that memory management might be the subject. So Racket gives you great control over the runtime, because it has one of the most flexible runtimes in existence; it might be the most flexible one in existence, specifically when it comes to control structures. Macros are one aspect of the flexibility of Racket. Another is that it has something called delimited control, which allows you to implement exceptions, threading, all kinds of things, in terms of the primitive constructs of the language. It does not give you control over memory management directly; you can influence the way that it deals with memory a little bit. It also doesn't give you a choice of whether it's interpreted or compiled, but if you want an interpreted language, you just write an interpreter, so I guess that gets you control over that.
But there's a great many examples that come with the Racket language and its package ecosystem of different languages. One example is a Haskell-like language; the only difference is that it has parentheses. And I think the Racket system also comes with a language that does functional reactive programming, where each value changes over time and you can see it change live in the REPL. The next question is from thecoder: did you implement a whole BASIC interpreter in Racket, and how does it compare in speed with a more traditional interpreter? So, well, I implemented a bit more than I showed, but I didn't put in all the features of Applesoft BASIC and the graphics. Notice that because it's implemented with macros, it is effectively a compiler: it compiles into Racket code. And I think it would compare quite favorably, because Racket has a pretty efficient compiler in the back end. So it's not apples to oranges; it's more like apples to forty-year-old prunes. Racket obviously doesn't run on an Apple II, but I think the Applesoft BASIC implementation that you're getting there is going to run quite fast, actually. The next question is from Proless: do you think Racket, with its extensive macro system, is particularly well suited for building languages? So, it is the best system in the world to build languages, right? In my commercial work, whenever we need to design a DSL, Racket is at least involved in the prototyping stage, and it could very well be involved in the production stage as well. Specifically, you've noticed that Racket is a Lisp descendant, I guess in many ways, and of course, if you take a more traditional Lisp like Clojure, it also has a macro system. But the macro systems of traditional Lisps are significantly less powerful than the one in Racket, and there are two differences that are relevant. One of them is that the syntax objects in traditional Lisps often don't track source-code information, so you don't really get good error messages, which are important if you really have domain-specific languages that are exposed to domain experts. The other one has to do with hygiene. We didn't talk about this in the talk, because there are lots of other talks and papers on the topic of hygiene, but you know that sometimes when you write a macro you need to invent new names that are just supposed to be temporary. Doing that requires extreme care in a traditional Lisp system, whereas in Racket it's just automatic. And so the subtleties that I've shown are a pretty complete set; I don't think I deal with more subtleties on a day-to-day basis. In a traditional Lisp system, you would also need to deal with hygiene issues, which you don't in Racket. So that gets you one level of power above traditional Lisp in terms of how funky you can get with your DSL.
All languages have warts, wats, defects, and things that are just plain bad taste. Unless, of course, it's your language. Usually, implementing and maintaining your own language is a lot of work, but the Racket programming system makes creating a language as easy as having breakfast (almost) and thus a routine activity. Pick and assemble what you like from other languages, sprinkle your own favorite features, and voila - there's your fun experiment, your medium of expression, your educational language, your DSL. Racket's secret is its flexible syntax (hailing from Lisp), and its world-class macro system which is both super-powerful and easy to learn and use. (Going far beyond classic Lisp.) What's great is that you don't have to do all the work: Languages are just libraries of macros in Racket, and they can seamlessly interoperate with the Racket base language and each other. Don't worry if you dislike parentheses: Racket has you covered there, too.
10.5446/51826 (DOI)
OK, so we're going to cover quite a lot of material quite quickly today, so what I want to do first is make you aware of some resources you can use after this talk, should you want to follow up on any of this. First is my book, Cryptography in .NET Succinctly. The book's completely free, I'm not trying to sell you anything; you just go to the Syncfusion website, sign up and download it, and it mirrors what we're going to be talking about today. I also have the course Practical Cryptography in .NET on Pluralsight, and that covers what we're going over today but in a lot more detail: it talks about the why as well as the how of a lot of this stuff. If you don't have a Pluralsight subscription, I have some cards which give you one month of free access, so feel free to come up and grab one after the talk. OK, so I strongly believe that as developers it's your responsibility to help secure and protect your company's data. You just have to look at the news each day or each week and you hear about new data breaches. It's getting quite serious, so taking responsibility for your company's data is more important than ever. Typically, companies tend to make lots of excuses, and I've heard a lot of these in companies I've worked at. "We're too small to be hacked; no one's going to bother with us." That's not necessarily true; it just means you're potentially an easier target. "We have a firewall; no one's going to get through that and get to our data." It's great that you've got firewalls in place, but what about your internal operations staff? What if you have any disgruntled employees who can get access to your data and try to steal it? I've heard this one before as well: "We've never been hacked before, so why should we take this seriously?" Just because you've not been hacked before doesn't mean you won't be in the future. I truly believe that hope is not a strategy when it comes to security: hoping that you won't get attacked or have any of your data stolen is not a good strategy for success. How many people have worked in a company where you've got a deadline looming and lots of features you need to get in, and security just gets pushed further and further down the list of priorities to get the product shipped? Has anyone had that scenario before? I think I've had it in just about every company I've worked at. As a developer, it's your responsibility to push on these things and to stress how important it is to make sure security is not pushed to the bottom of the pile.
What this talk isn't about: it's not about deep mathematics or how a lot of these algorithms work internally. We would not be able to cover that in an hour, and it's nine o'clock in the morning; I'm sure nobody wants to do lots of complex maths. This talk also isn't about cryptanalysis. Cryptanalysis is the art and science of breaking codes, the sort of thing our governments probably spend a lot of their time doing every day. That's not what this talk is about. What this talk is about is people like you and me, regular developers who work for companies and produce code day in, day out to provide value to our customers. A lot of the code we're going to look at today is based around the .NET Framework, or, as you can now call it, the traditional .NET Framework. So server-side code, like web APIs and WCF services, or client code, like WinForms and WPF applications, all that sort of thing. But even though we're talking about Microsoft APIs specifically in this talk, the principles we're going to discuss are relevant across any platform. The APIs might be slightly different, but what you're trying to achieve is effectively the same across different languages: Java, Ruby, PHP, Python, Node, the principles are exactly the same. What we're going to cover: we're first going to talk about random numbers and why they're so important. We'll then take a look at hashing and hashed message authentication codes. We'll then take a deeper look at secure password storage and password management. We'll then take a look at symmetric encryption, and then we'll look at asymmetric encryption with things like RSA, and then we'll look at digital signatures. That will give us a lot of the building blocks that we need to go ahead and build what is called a hybrid encryption scheme: using a lot of these building blocks together to create something more powerful. So what is cryptography? I think it would be a good idea to cover this, especially if anyone has accidentally walked into the wrong room and is too embarrassed to walk out. Cryptography is basically about protecting information, and generally that's done via encryption. When you encrypt data, you have encryption keys, and that encrypted data then becomes what's called ciphertext. This is generally what we're talking about with cryptography. The art of trying to break codes and work out keys to decrypt data is called cryptanalysis. There's more to cryptography than just encryption, though; there are four distinct pillars that we look at. There's confidentiality. This is what we all typically think of with cryptography: I have some data, I encrypt it with a key, and that data is completely scrambled so no one can read it. We have the concept of integrity: if I have some data and I send it to my recipient, has that data been tampered with or corrupted in transit? We can use cryptography to help us with data integrity. We also have authentication: I have some encrypted data; am I allowed to view this data? Am I authenticated to see it? And we also have non-repudiation, and this is all about proving that you have sent an encrypted message. It's a similar analogy to a contract: if I send a contract to someone and they then try to dispute that I sent them that contract, by using non-repudiation we can actually prove that it was us that sent that contract to them. Cryptography is pretty much everywhere.
You can't switch on a device or do much of anything without some kind of cryptography being in place. Online shopping: if you're buying stuff from Amazon or any of your other favourite websites, you have the little padlock in the browser. ATM machines: when you're drawing out cash from the wall, there's a cryptographic handshake between you and the bank when you put your PIN in. Mobile phones: it's obviously a very hot topic at the moment, especially with what's been going on with the FBI trying to break into the San Bernardino killer's iPhone; a lot of these phones these days are heavily encrypted. Also, there are uses like Bitcoin: Bitcoin, as a currency, is a cryptographic protocol. Another example of cryptography is in voting and voting machines, proving that you've only voted once and using cryptography so that you can't cheat the voting system. Let's start off by looking at random numbers. Random numbers are effectively one of the most important primitives that we need when we're dealing with cryptography. We use random numbers generally for creating encryption keys. A good random number needs to be truly random and unpredictable. Traditionally, in .NET, when you're doing random number generation, you might use something like System.Random. That's OK for something simple like simulating a dice roll or lottery numbers, for example. But when you're trying to generate random numbers for cryptographic keys, System.Random is not good enough, and it's also not thread safe. System.Random gives you the appearance of randomness, but actually it's very deterministic: if you don't change the seed every time you create a random number, you'll get the same sequence of numbers out of it. For cryptography, that's no good. In .NET, there's a better class called RNGCryptoServiceProvider. This lives, along with everything else we're talking about, in the System.Security.Cryptography namespace. RNGCryptoServiceProvider is a lot slower to run than System.Random, but the numbers you're going to get out of it are non-deterministic, which makes it excellent for generating encryption keys. You'll see examples as we go through the talk where we generate 256-bit, or 32-byte, random numbers which we use as keys. RNGCryptoServiceProvider isn't implemented purely in .NET; it actually uses the underlying cryptographic platform in Windows, the same libraries you'd be using from C++ or the operating system. RNGCryptoServiceProvider is very easy to use, and this will be a common theme: everything we're talking about today is actually very easy to use. In the little sample code here, we have a method called GenerateRandomNumber, and we pass in a length, which is the number of bytes we want to generate. If you want a 32-byte random number, you pass 32 into there. We create an instance of the crypto service provider class, then we initialise a new array to the correct length that we want, and then we just call GetBytes, and then we return that byte array. Actually generating our encryption keys is as simple as those few lines of code. Moving on to the next part in our stack of primitives that we want to look at, we have hashing. Hashing, you can think of it as a bit like a digital fingerprint of a piece of data.
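Before going on to hashing, here is what that random-number helper looks like written out; a sketch, with the class name RandomGenerator being mine rather than the speaker's:

```csharp
using System.Security.Cryptography;

public static class RandomGenerator
{
    // Returns `length` cryptographically strong random bytes,
    // suitable for key material (e.g. 32 bytes for a 256-bit key).
    public static byte[] GenerateRandomNumber(int length)
    {
        using (var randomNumberGenerator = new RNGCryptoServiceProvider())
        {
            var randomNumber = new byte[length];
            randomNumberGenerator.GetBytes(randomNumber);
            return randomNumber;
        }
    }
}
```

A 256-bit key is then just `RandomGenerator.GenerateRandomNumber(32)`.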
If you have a piece of data, a byte array of data, a PDF document, etc., and you generate a hash code, you get a code at the end of it which is effectively the fingerprint for that piece of data. If you go and change that original document in any way and then recalculate the hash code, that hash will be completely different. With hashing, there are four requirements. First of all, a hash needs to be easy to compute: I have a piece of data, I run it through a hashing function, and I get a hash code at the other end. It should also be infeasible to generate a specific hash: you shouldn't be able to say, "given this hash code here, what's the data I need to create that hash?". It only works the other way around: you have a piece of data, you run it through a hash function, and you generate a hash code. Another requirement is that it should be infeasible to modify the original message without changing the hash. As I said before, if you have a piece of data, generate a hash code, and then change just one bit of that data, the hash code should be completely different. Not slightly different, but completely different. The final requirement of a good hashing algorithm is that it should be infeasible to find two messages with identical hashes. You shouldn't be able to take one piece of data, generate a hash code, and then find a second piece of data that generates exactly the same hash code. That's called a hash collision, and you shouldn't be able to do that. Hashing is what we call a one-way operation: once you generate a hash code, you can't, or shouldn't be able to, go back to the original message. Encryption, as you can imagine, is more like a two-way operation: we encrypt a piece of data with a key, but then we can use a key to decrypt that data, so it's reversible. Hashing is only one-way. The most common hashing algorithm that people have probably heard of is MD5, and it's been around for a long time, since 1991. It produces a 16-byte hash value, and it was designed by a guy called Ron Rivest. The problem with it is that in 1996 a collision vulnerability was found: someone managed to generate the same hash from different pieces of data. So MD5 as a hashing algorithm these days isn't really good enough to use, but I still mention it here because, if you work in a larger organisation, like a bank, for example, you may still have a lot of older legacy systems you need to integrate with, and they may still use MD5. As an example of that, I used to work for an internet bank in the UK, and our back-end banking platform was an old AS/400 mainframe system, and whenever we sent messages to and from that system, we had to generate MD5 hash codes. So it is possible you'll still need to use it, but for a new system you wouldn't want MD5. OK, so moving on from MD5, we have the Secure Hash Algorithm family, or the SHA family of hashes. There's SHA-1, which generates a 160-bit hash, and then there's SHA-2, which can generate 256-bit or 512-bit hashes, and those two are both implemented in the .NET Framework. But there is also a newer one called SHA-3, which is now available. SHA-1 and SHA-2 were both designed by the National Security Agency in the United States, and, rightly or wrongly, that makes some people a little bit nervous.
So, there was a competition a while ago, with the winner announced in 2012, to find a new variant of the SHA algorithm that is not NSA-designed. And the winner of that was an algorithm called Keccak; I'm never quite sure how to pronounce it. Currently, this isn't implemented in the .NET Framework, but you can get some open source implementations of it; whether you want to trust them or not is kind of up to you. I imagine it's only a matter of time before Microsoft implements it in the framework. What we're going to talk about today is SHA-2, specifically SHA-256. It's very easy to use. In our little method here, we pass in a byte array, which is the data we want to generate a hash for. Then we call the static Create method on the SHA256 class, and then you just call ComputeHash whilst passing in the data you want to hash, and you get a byte array back, which is your hash code. So, again, it's very, very easy to use. Moving on from hashing to the next level, we have what are called authenticated hashes, or hashed message authentication codes, or HMACs, as they're often called. Conceptually, this is exactly the same as a SHA-256 hash: you pass some data in, you get a hash code out. But what's different is that you also pass in a key when you create the hash. What this means is that if I then send that hash to someone else, for any one of you to be able to recalculate that same hash, you need to have that key. This is where the idea of authentication comes in: you can only generate that same hash if you have the key. So it's commonly used for both verifying integrity and authentication, and you can base an HMAC on either MD5 or the SHA family of hashes. And the strength of this is based on the key: if you use a good strong key, say 32 bytes long, 256 bits, it's going to be quite difficult for someone to brute-force it. The most common attack against this type of hashing construction is a brute-force attack, but, as I said, a good strong key makes that quite hard to do. So, again, an HMAC is very, very easy to use. We have two pieces of information that we pass into our method here: a byte array of the data to be hashed, and a byte array which is our key. That key was generated using RNGCryptoServiceProvider. We create an instance of the HMACSHA256 class whilst passing the key into the constructor, and then you simply call ComputeHash, passing in the data you want to hash, and you get the hash code back. Again, very, very easy to use. OK, so next up, we want to talk about passwords, and there are various different ways in which you can manage passwords, ranging from not very good up to excellent. The first one, and we'll just get it out of the way and then move on, is storing plaintext passwords. I don't need to spend much time on this; I'm sure everyone knows that that is completely wrong. Well, there are still a lot of sites out there that do this, but you never store plaintext passwords in your database. So, the next best thing is to hash a password. The way this works is, say you have a person logging on or signing up to a system: they type their password in, you create a hash, say a SHA-256 hash, of that password, and then store it in the database.
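For reference, the two hashing helpers just walked through look roughly like this in C#; the class and method names are mine, not the speaker's:

```csharp
using System.Security.Cryptography;

public static class Hashing
{
    // Plain SHA-256: a fingerprint of the data, no key involved.
    public static byte[] ComputeSha256(byte[] toBeHashed)
    {
        using (var sha256 = SHA256.Create())
        {
            return sha256.ComputeHash(toBeHashed);
        }
    }

    // HMAC-SHA256: only someone holding the key can recompute
    // (and therefore verify) this authentication code.
    public static byte[] ComputeHmacSha256(byte[] toBeHashed, byte[] key)
    {
        using (var hmac = new HMACSHA256(key))
        {
            return hmac.ComputeHash(toBeHashed);
        }
    }
}
```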
Then the next time they come and log on, they type their password in, a hash is generated on the client, and it's compared against the hash in the database. If they match, they've put the correct password in. But there's a problem with this, and that problem is that you can either brute-force those passwords by trying lots and lots of different combinations, or you can use what's called a dictionary or rainbow-table attack, which is a massive precomputed database of passwords and different password combinations. Even the clever things where you try to turn the vowels into numbers to outfox people: all that sort of stuff will be in there. And the way a lot of these attacks work, using tools like Hashcat, is by using the GPUs in your computer, your graphics processing units, to do billions of hash attempts per second. Imagine how many hashes per second you can do with a big, powerful machine with two NVIDIA GTX 1080s in it. To give an example of how easy a hashed password is to crack, there's a screenshot of a website here called CrackStation.net. In the grey box on the left, I've pasted in a hash code, which is a SHA-256 hash. You click "Crack Hash", and it's worked out that the password is Secret69. It's a very simple example, but conceptually that's how a lot of these sites work. And if you've got an MD5 hash, you can just paste it into Google and it will reverse it for you. Seriously, give it a try; it's quite scary. So, has anyone ever worked on a system where you've used plain hashing to store passwords in a database? I've worked on systems that have done it before; I think pretty much everyone has. So what's the next best thing, the next level on from that? You can do what is called a salted hash. A salted hash is the password plus a salt value. A salt value is just an arbitrary random piece of data: another random number, which you generate with RNGCryptoServiceProvider. You then append that onto your password, and then you create a hash of that password and the salt together. And this is good. It means it's much, much harder, probably impossible at the moment, to brute-force any of these passwords. So that's great. Has anyone done this in any systems? Again, this is quite a common way of doing it, and there's nothing inherently wrong with that. But the problem is that as GPUs and processors increase in speed over time, a salted password which is secure now might be vulnerable in five years' time. You just don't know. And this is the problem with Moore's law: processor speeds and GPU speeds keep climbing, so it's only a matter of time before someone comes out with a GPU which is capable of cracking a salted hash. So what we want to do is go one step further and mitigate this problem of attackers trying billions of hash attempts per second. The next best thing, and the recommended thing to use, is what's called a password-based key derivation function. Or, if you want to impress your friends down the pub, a PBKDF2, if you like acronyms. Again, this is the same as what we've been talking about: we have a password that we want to hash, and we have a salt. But what we also have here is an iteration count that we pass in.
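Backing up one step, the salted-hash approach just described is, in sketch form (helper name mine):

```csharp
using System;
using System.Security.Cryptography;

public static class SaltedHash
{
    // Sketch of the salted-hash idea: append a random salt
    // (e.g. from RNGCryptoServiceProvider) to the password, then hash.
    public static byte[] HashPasswordWithSalt(byte[] password, byte[] salt)
    {
        using (var sha256 = SHA256.Create())
        {
            var combined = new byte[password.Length + salt.Length];
            Buffer.BlockCopy(password, 0, combined, 0, password.Length);
            Buffer.BlockCopy(salt, 0, combined, password.Length, salt.Length);
            return sha256.ComputeHash(combined);
        }
    }
}
```

The salt is stored alongside the hash; its job isn't secrecy but defeating precomputed rainbow tables.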
And what that does is tell the algorithm how many times to rehash that password. The reason this is good is that if, at the moment, an attacker can test, say, two billion combinations per second, then with enough iterations on your password-based key derivation function you might reduce that to the point where they can only test, say, ten per second, or two per second, depending on what you pass in there. And I'll show you a graph in a moment of what the different speeds look like. So first of all, I'll show you how to use it. We have our method here, and we pass in a byte array of the data to be hashed, just as before; a byte array for our salt, a 32-byte random number of junk that you append onto the password; and a number of iterations. Now, the class in the .NET Framework you want to use is called Rfc2898DeriveBytes. You'll be forgiven for overlooking that one in the framework, because it's not obvious what it does. Under the bonnet, or under the covers, Rfc2898DeriveBytes uses SHA-1 to do its hashing, which means you get a 20-byte hash value out of it. So when I call GetBytes, I only really need to get the first 20 bytes for that hash value. If you look at the chart here (when I first created this chart, I was using an older laptop, but I tested some hashes): 100 iterations took 2 milliseconds to hash a password; 1,000 iterations took 16 milliseconds; 10,000 iterations took 196 milliseconds; and you can see it sort of scales up. When I did 500,000 iterations, it took 7 seconds to hash a password. Now, the value you put in there is a trade-off. You have to look at what you're using the hash for and what the speed implications are going to be for you. On a good robust website, you may notice a bit of a delay when you put the password in and log in; that's probably because they're doing a password-based key derivation function call behind the scenes. On systems I've worked on, I've typically used anywhere between 50,000 and 100,000 iterations to hash a password for logging in to the system, because that kind of natural delay is kind of OK. Well, I think it's OK. But if you're hashing data in something that's high-speed and transactional, then 50,000 iterations would be too slow. So you need to think about the trade-off between how many iterations you want and what the speed penalties are going to be. So, while we're on the subject of passwords, this company's been brought up several times while we're here, and it's fun to talk about. There's a... I take it everyone's heard of Ashley Madison; everyone saw Troy's keynote yesterday. One of the things that happened when Ashley Madison were hacked is that the password tables were all stolen. But Ashley Madison had actually been quite good: they'd used something called bcrypt to hash their passwords. Now, bcrypt is something that's very similar to the password-based key derivation function; it's an iteration-based hash function, just a different type of implementation. So they'd used this across their passwords, and the attackers tried to recover a lot of the passwords, and they couldn't. So that's good. But the attackers then also had access to the source code, which had been stolen. And what they found was that some unwitting programmer had probably tried to optimise the logging-in system; I'm not quite sure what the motive was. But they'd started storing a local token of the password and the username, which they then MD5 hashed.
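Returning to the API for a moment, the PBKDF2 helper described above looks roughly like this; a sketch, with names mine:

```csharp
using System.Security.Cryptography;

public static class PasswordHashing
{
    // PBKDF2 via Rfc2898DeriveBytes. It uses HMAC-SHA1 internally,
    // so 20 bytes is the natural output size.
    public static byte[] HashPassword(byte[] toBeHashed,
                                      byte[] salt,
                                      int numberOfIterations)
    {
        using (var rfc2898 = new Rfc2898DeriveBytes(toBeHashed, salt,
                                                    numberOfIterations))
        {
            return rfc2898.GetBytes(20);
        }
    }
}
```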
I think the idea was that when you came back to re-log into the system, it would log you in quicker. So they probably thought they were doing something good, making the re-logging-in process quicker. When the hackers found this out, they said: let's not attack the bcrypt passwords, let's attack the MD5 hashes. So they did, and they managed to recover, I think, about 10 million passwords from the system. The reason I'm telling this story is that security is only as good as your weakest link. Their password management generally was pretty good: they used bcrypt to store their passwords. But they had a weak link in the chain where they were storing this token with MD5-hashed passwords, which meant all the good work they'd done with bcrypt was basically undone at that point. There's a really good article on Ars Technica, so I've put a bit.ly link there, which goes into that story in a lot more detail, and it's quite an entertaining read. I definitely recommend reading it. OK, so let's move on to encryption. First of all, we're going to talk about symmetric encryption. What this is: you have some plaintext data, and you encrypt it with a key, which gives you your ciphertext data. But then, to decrypt the message, you decrypt it with the same key. That's why it's symmetric: you use the same key to encrypt and decrypt. But there is a drawback to symmetric encryption, and that is that sharing keys is very difficult to do. If I encrypt some data and I want to send that data to, say, five of you in the audience, how do we share that key? I can't email it to you; that's a vulnerability. I can't just put it on the network somewhere. Maybe I could meet you all in person and hand it to you. So key sharing in general is quite hard to do, and one of the things we're going to talk about later is how to mitigate the complexities of key sharing. This is a diagram we looked at earlier, where we were saying that hashing is a one-way function. It reiterates the point that encryption is a two-way operation: you have some data, you encrypt it, but you can also reverse that operation and get your data back. OK, so the way symmetric encryption works is that it takes the data you want to encrypt, chops it up into blocks, and encrypts several bytes at a time. These blocks are padded to the same size: you chunk the data up into, say, 128-bit blocks, and if the block at the end is too small, then you just pad it out. There are three symmetric encryption algorithms supported in .NET: AES, DES and Triple DES. We're mostly going to focus on AES, because that's the one that's recommended to use these days. But the reason I've put DES and Triple DES up there is, again, if you're working with legacy systems that use DES to encrypt data, and you need to interact with those systems, you'll then need to use DES to encrypt that data. So AES is what we're going to look at, and it was invented by two Belgian cryptographers, Joan Daemen and Vincent Rijmen, who created what was called the Rijndael cipher. Then, in 2001, the National Institute of Standards and Technology adopted the Rijndael cipher as AES, the Advanced Encryption Standard. The way you use AES is quite simple. You pass into it your plaintext, a byte array of the data you want to encrypt. You also pass in a byte array of something which is called an initialisation vector.
And what that is, is a small byte array of data which is used to help jump-start the AES encryption algorithm. The initialisation vector doesn't have to be kept secret; it travels along with your message. The secrecy isn't based on the initialisation vector. And then you also pass in a key. AES supports 128-, 192- and 256-bit keys, and I always recommend you go straight for 256-bit keys, which is 32 bytes. You pass all those into the AES algorithm, and then you get your ciphertext back out at the other end. Then, to decrypt that data, instead of passing in the plaintext, you pass in the encrypted data, the same initialisation vector and the same key, and then it decrypts your data. In .NET there are two implementations of AES that you can use: one called AesManaged, and one called AesCryptoServiceProvider. AesManaged is natively written in .NET, so it's a CLR-based object, and it works fine; I've used it several times. The main drawback is that it's not FIPS 140-2 certified. If you're only encrypting and decrypting data between .NET systems, that might not necessarily be a problem. But if you're working with a lot of other systems written in Java, Node, or any sort of mainframe environment, using implementations that are FIPS certified means that you're guaranteed that any data you encrypt in .NET, you can then go and decrypt on a mainframe. The AesCryptoServiceProvider object in .NET is FIPS 140-2 certified, and it's not written in managed .NET code; it uses the underlying Windows crypto platform. It's quite straightforward to use. We have a method here that takes the data to encrypt; we pass in a byte array for our key, so that's a 32-byte array; and we pass in an initialisation vector, which is 16 bytes. Then we create an instance of the AesCryptoServiceProvider class, pass in the key and initialisation vector, and create a MemoryStream and a CryptoStream, because it's all stream-based. Then you just write the data into the stream and flush it, and that gives you your encrypted data back at the other end as a byte array. Decrypting data is very similar: you pass in the key and initialisation vector, create the crypto service provider object, pass in the key and the IV, create your MemoryStream and CryptoStream, this time with a decryptor, and that gives you your decrypted data back as a byte array. So the next one to look at is asymmetric encryption. What we've talked about so far has been symmetric: you use the same key to encrypt and to decrypt. Asymmetric encryption you've probably heard commonly referred to as public and private key cryptography. The idea is that you have some data you want to encrypt, and you encrypt it with your recipient's public key. You then send them that data, and to decrypt it, they use their private key. They're the only person that will have their private key, so they have to look after it. But their public key anyone can have: you can post it on your website, you can hand it out, it doesn't matter. We're going to use an algorithm called RSA, and it was developed by three people, Rivest, Shamir and Adleman, who went on to found the company RSA Data Security. The way RSA works is more of a mathematical process: whereas AES is algorithmic and works on blocks of data, RSA is mathematical and uses modular arithmetic.
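Before going deeper into RSA, here's the AES helper just walked through, as a sketch (class name mine):

```csharp
using System.IO;
using System.Security.Cryptography;

public static class AesEncryption
{
    public static byte[] Encrypt(byte[] dataToEncrypt, byte[] key, byte[] iv)
    {
        using (var aes = new AesCryptoServiceProvider())
        {
            aes.Key = key;  // 32 bytes for AES-256
            aes.IV = iv;    // 16 bytes

            using (var memoryStream = new MemoryStream())
            {
                using (var cryptoStream = new CryptoStream(
                           memoryStream, aes.CreateEncryptor(),
                           CryptoStreamMode.Write))
                {
                    cryptoStream.Write(dataToEncrypt, 0, dataToEncrypt.Length);
                    cryptoStream.FlushFinalBlock();
                }
                return memoryStream.ToArray();
            }
        }
    }

    public static byte[] Decrypt(byte[] dataToDecrypt, byte[] key, byte[] iv)
    {
        using (var aes = new AesCryptoServiceProvider())
        {
            aes.Key = key;
            aes.IV = iv;

            using (var memoryStream = new MemoryStream())
            {
                using (var cryptoStream = new CryptoStream(
                           memoryStream, aes.CreateDecryptor(),
                           CryptoStreamMode.Write))
                {
                    cryptoStream.Write(dataToDecrypt, 0, dataToDecrypt.Length);
                    cryptoStream.FlushFinalBlock();
                }
                return memoryStream.ToArray();
            }
        }
    }
}
```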
And the way it works is that there should be no efficient way to factor a number that is the product of two very large primes. The current recommended minimum key length is 2048 bits, and at the heart of such a key is one massive number with exactly two prime factors. The one drawback of RSA, because it's a mathematical scheme, is that the larger the key size you use, the slower RSA is, and it is quite slow. As I was saying, the keys are based on prime number factorisation. If you have two prime numbers, 23 and 17, and I ask you to multiply them together, it's quite easy to do; you can do it in your head or on a calculator. But if I ask which two prime numbers you need to multiply together to make 5963, does anyone know the answer to that? I'm pretty sure someone's going to shout it out one day and make me look really stupid. It's a lot harder to do. It's 67 times 89 that makes 5963. So, loosely speaking, the public key is the 5963, the number that everyone else can know, but the private key is those two prime factors, 67 and 89, and that's the bit you want to keep secret. There's a lot more to how RSA keys work than that, but fundamentally it's all based around the difficulty of factorising into primes. So that all sounds quite complicated, but using it really isn't that hard. First of all, we want to generate some keys. We have a method here called AssignNewKey. We create an instance of the RSACryptoServiceProvider class and pass in the key strength that we want to use; we're going to use 2048 bits in this example. Then, to export our public key, we just call ExportParameters whilst passing in false, and to get our private key, we call ExportParameters whilst passing in true. In the code there, we're just storing the keys in memory. Unfortunately, we haven't got time to talk about effective key management strategies, but typically you don't just want to write these out to files and keep them on your server, because that's not very safe. You probably want to use certificates, or hardware security modules, which are network appliances that go into your data centre and are designed for storing keys. But for the purposes of the example, we can just store the keys in memory. To encrypt some data, we have our method here, and we pass in a byte array of the data we want to encrypt. We create an instance of RSACryptoServiceProvider again, whilst passing in the key strength, then we call ImportParameters and pass in our public key, and then we just call rsa.Encrypt. That encrypts the data and gives us a byte array of our encrypted data back. To decrypt the data, it's very similar: create an instance of RSACryptoServiceProvider, import our private key, and then just call rsa.Decrypt, and that gives us our decrypted data back. One particular problem with RSA is that you can only encrypt data up to roughly the size of the key: with a 2048-bit key, you can encrypt a maximum of 2048 bits of data, and a bit less in practice, once you account for the padding. You could take the data you want to encrypt, split it up into chunks, and encrypt each of those separately, but generally you're limited in how much you can encrypt at once with RSA. That's not necessarily a problem, as we'll come on to later. So the final primitive we're going to look at is digital signatures. A digital signature consists of three different algorithms that we're going to use. We have a key generator, which we've just seen.
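Here's that RSA key generation, encryption and decryption in sketch form; the OAEP padding flag is my choice, since the talk doesn't specify a padding mode:

```csharp
using System.Security.Cryptography;

public class RsaEncryption
{
    private RSAParameters _publicKey;
    private RSAParameters _privateKey;

    public void AssignNewKey()
    {
        using (var rsa = new RSACryptoServiceProvider(2048))
        {
            rsa.PersistKeyInCsp = false;              // keep keys in memory only
            _publicKey = rsa.ExportParameters(false); // public part
            _privateKey = rsa.ExportParameters(true); // includes private part
        }
    }

    public byte[] Encrypt(byte[] dataToEncrypt)
    {
        using (var rsa = new RSACryptoServiceProvider(2048))
        {
            rsa.PersistKeyInCsp = false;
            rsa.ImportParameters(_publicKey);
            return rsa.Encrypt(dataToEncrypt, true); // true = OAEP padding
        }
    }

    public byte[] Decrypt(byte[] dataToDecrypt)
    {
        using (var rsa = new RSACryptoServiceProvider(2048))
        {
            rsa.PersistKeyInCsp = false;
            rsa.ImportParameters(_privateKey);
            return rsa.Decrypt(dataToDecrypt, true);
        }
    }
}
```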
We have a signing algorithm, so we can sign a piece of data. And then we have a signature verifier: if we have a digital signature of a piece of data, say a PDF document, and we then want to verify that that signature is valid, we use a signature verifier. The key generator is based on RSA, as we've just seen. And the way the signing algorithm works is that we sign our data using the private key. If you look back to when we did RSA encryption, we encrypted the data with the recipient's public key; when we create a digital signature of data, we use our own private key to create that signature. Then, when the recipient wants to verify that the signature is valid, they use my public key. Typically, you don't create a digital signature of the actual data itself. If you're trying to sign, say, a large PDF document, you create a SHA-256 hash of that document first, and then you create the digital signature of that hash. Because digital signatures use RSA under the covers, they have the same limitations on the amount of data you can sign in one go. So, typically, you create a hash of your data and then you create a digital signature of that hash. If we look at my expert piece of artwork to demonstrate this: we have a guy called Bob, and he wants to create a digital signature, so he does that using his private key. He then sends the digital signature over the internet, or the intergalactic spider's web, as my picture shows. He sends it to Alice, and then she wants to verify that the signature is valid, so she uses Bob's public key with the signature verifier. If it was indeed Bob that sent that signature, then it will be valid. Early on, we talked about the concept of non-repudiation, about being able to prove that someone has sent something. The reason we know it was Bob that sent this digital signature is that he used his private key, and only Bob knows his private key. So if we can verify that the signature is valid when it's sent to us, it can only have come from Bob, unless his private key has been stolen. To use digital signatures, again, we need to generate a key pair; it's the same code as before. We export our public and private keys. Then, to sign some data, we pass in a byte array which is the hash of the data we want to sign: take your PDF document, create a hash of that data, pass it into this method. We then import our private key, and we create an instance of a class called RSAPKCS1SignatureFormatter. I don't know who comes up with these names, but again, it's very easy to overlook it in the framework. We set a hashing algorithm on that, so under the covers we're going to use SHA-256, and then you just call CreateSignature and pass in the hash of the data you want to sign. Then you get a byte array returned, which is your digital signature. To verify that the digital signature is valid, we have a method here that takes the hash of the data that was signed and the actual byte array of the digital signature itself. We import the public key, because we're using the sender's public key to verify the signature. We create an instance of RSAPKCS1SignatureDeformatter, which really rolls off the tongue. Again, we set the hashing algorithm to SHA-256, and then you call VerifySignature, passing in the hash of the data that was signed and the actual signature itself, and that just returns a boolean, true or false.
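Sketched out, signing and verifying look like this; key handling is simplified, and the class shape is mine:

```csharp
using System.Security.Cryptography;

public class DigitalSignature
{
    private RSAParameters _publicKey;
    private RSAParameters _privateKey;

    public void AssignNewKey()
    {
        using (var rsa = new RSACryptoServiceProvider(2048))
        {
            rsa.PersistKeyInCsp = false;
            _publicKey = rsa.ExportParameters(false);
            _privateKey = rsa.ExportParameters(true);
        }
    }

    // Sign the SHA-256 hash of the document with the sender's private key.
    public byte[] SignData(byte[] hashOfDataToSign)
    {
        using (var rsa = new RSACryptoServiceProvider(2048))
        {
            rsa.PersistKeyInCsp = false;
            rsa.ImportParameters(_privateKey);

            var signatureFormatter = new RSAPKCS1SignatureFormatter(rsa);
            signatureFormatter.SetHashAlgorithm("SHA256");
            return signatureFormatter.CreateSignature(hashOfDataToSign);
        }
    }

    // Verify with the sender's public key.
    public bool VerifySignature(byte[] hashOfDataToSign, byte[] signature)
    {
        using (var rsa = new RSACryptoServiceProvider(2048))
        {
            rsa.ImportParameters(_publicKey);

            var deformatter = new RSAPKCS1SignatureDeformatter(rsa);
            deformatter.SetHashAlgorithm("SHA256");
            return deformatter.VerifySignature(hashOfDataToSign, signature);
        }
    }
}
```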
True means a valid signature, false means it is not. So if, for example, when we go to verify the signature, the hash that we're trying to verify has been changed in any way, and we pass it into VerifySignature with the digital signature itself, then verify will come back as false, because it's not a valid digital signature for that data. OK, so let's recap our four main pillars of cryptography. First of all, we had confidentiality, and for confidentiality, we've used both AES and RSA. For integrity, we've looked at hashing, and we discussed a lot about SHA256. For authentication, we've used hash message authentication codes based around SHA256, and for non-repudiation, we've just looked at digital signatures. So now what we want to do is use a lot of these together to create what's called a hybrid encryption scheme. OK. So as we've discussed, RSA has limits on the amount of data you can encrypt in one go, and it's quite slow. But AES is very fast and quite efficient; the problem is that exchanging keys is very difficult. So what we want to do is combine RSA and AES to create what's called a hybrid encryption scheme. So we're using the power of both asymmetric and symmetric encryption algorithms to encrypt data and share keys. So if we look at the example here, we create an AES session key, and that's just a 32-byte random number that we use as our key for AES. We encrypt some data with that key, but then what we do is use the recipient's public key and RSA to encrypt that session key. We send that across to our recipient, they use their private key to decrypt that session key, and once they've recovered that key, they can then decrypt the message with AES. So when we send our encrypted data to the recipient, we're sending them three pieces of information: we've got the RSA encrypted session key, we've got the initialisation vector, which we use for jump-starting AES, and we have the actual AES encrypted data itself. So let's run through that as an example. So we've got Alice, she generates a 32-byte AES key, and she generates her 16-byte initialisation vector. She encrypts her data with AES using that session key, so we've now encrypted our data, so that's good. We then use Bob's public key and RSA to encrypt that session key. We can then package all that data up and send it across to Bob. So on the other end, Bob's received this packet of information, and he uses his private key to decrypt the AES session key. So we've recovered the key, and we can now use AES with the initialisation vector to decrypt our data. And then Bob can read the message: meet me at noon below the clock tower, wear a red rose in your buttonhole. I've been reading far too many spy novels. So to reiterate this, let's get Bob to send a message back to Alice. So Bob generates his own AES session key, because once we've used the other one, we throw it away, we're not going to reuse it, we're going to generate a new one. So he generates a new 32-byte key, he generates his own initialisation vector, which is 16 bytes, and he uses AES with that session key and initialisation vector to encrypt his reply. He then uses Alice's public key, because we're sending a message back to Alice, so we're going to use her public key to encrypt that session key. He packages it up and emails it, or however he's going to send it, back to Alice.
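(The sending side of that exchange, sketched under the same assumptions as the earlier snippets. HybridPacket is a made-up container type for the three pieces of information described above, and TransformFinalBlock is a shortcut that works for short messages; real code would usually stream through a CryptoStream.)

    using System.Security.Cryptography;

    public class HybridPacket
    {
        public byte[] EncryptedSessionKey;
        public byte[] Iv;
        public byte[] CipherText;
    }

    public class HybridEncryption
    {
        // Encrypt the message with a fresh AES session key, then protect
        // that session key with the recipient's RSA public key.
        public HybridPacket Encrypt(byte[] plainText, RSAParameters recipientPublicKey)
        {
            using (var aes = Aes.Create())
            using (var rsa = new RSACryptoServiceProvider(2048))
            {
                aes.KeySize = 256;      // 32-byte session key
                aes.GenerateKey();
                aes.GenerateIV();       // 16-byte initialisation vector

                byte[] cipherText;
                using (var encryptor = aes.CreateEncryptor())
                {
                    cipherText = encryptor.TransformFinalBlock(plainText, 0, plainText.Length);
                }

                rsa.ImportParameters(recipientPublicKey);

                return new HybridPacket
                {
                    EncryptedSessionKey = rsa.Encrypt(aes.Key, true), // only the small key goes through RSA
                    Iv = aes.IV,
                    CipherText = cipherText
                };
            }
        }
    }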
So, back on Alice's side: she then uses her private key to recover that AES session key, and then she uses that recovered key with the initialisation vector to decrypt the message reply. The message is: I will meet you, I'll be wearing a blue hat and red boots. She's very fashionable. So that's pretty good. We've used the flexibility of RSA to be able to securely share keys between our recipient and sender, but we've also used the speed and efficiency of AES to encrypt our actual message. So we've fixed two of our problems. But now let's add some integrity to that. So if Alice sends some data to Bob, Bob wants to make sure that that data hasn't been tampered with or corrupted in transit. So as before, we generate a session key for AES, we encrypt our data, we then use RSA and the public key to encrypt that session key, and then we also generate a hash message authentication code of the encrypted message. And because we're using a hash mac, we have to pass a key into it, so we use the session key. That means on the other side, when we send the message, the only way that the recipient can check that the hash mac is valid is by recovering the key, and they need their private key to do that, which is where the idea of authentication comes in: they can only verify that that hash is valid if they can recover the AES key, and they need their private key to do that. So that means when we send our data across to our recipient, we've got the RSA encrypted session key, we've got the AES initialisation vector, we've got the AES encrypted data, and we're also sending the hash mac of our encrypted data. So that's all pretty good. Let's take that one step further. So we've added integrity to our message, we've got effective key sharing using RSA, and we're using the flexibility and the power of AES to encrypt our message. But now we want the ability for our recipient to be able to prove that it was actually Alice that sent the message to him, and that's where we're going to use digital signatures. So as before, we generate our AES session key, we encrypt our data with that session key, we use RSA and the public key to encrypt that session key, we then create a hash mac of the message we've already encrypted with AES, using the session key as the hash mac key, and then we create a digital signature of that hash mac. So we've created the hash mac already, and we then create a digital signature of it using the sender's own private key, Alice's in this example. This means when we send the data across to the recipient, we have the RSA encrypted session key with our initialisation vector, we've got our AES encrypted data, we've got the hash mac of the data, so that's how we're checking integrity on the other end, by checking the hash, and we've also created a digital signature of that hash mac. So when Alice sends a message to Bob, Bob can be sure that it was Alice that sent the message and not some other third party. So we've covered quite a lot in a short space of time there. We've covered random numbers, hashing and hash macs, secure password storage, AES encryption, RSA encryption, digital signatures and hybrid cryptography. So I'm sure you're all going to remember that by five o'clock this afternoon. So what next? What we've talked about today, we've covered a lot in an hour, so really treat this talk as the art of the possible: what can we do with the stuff that's in .NET?
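(Before the pointers to further reading, the final integrity-plus-signature step in sketch form. The method names are made up for illustration, and the 32-byte HMAC output is treated as a SHA256-sized hash, exactly as the talk describes.)

    using System.Security.Cryptography;

    // Integrity plus authentication: an HMAC of the AES-encrypted data,
    // keyed with the session key that only the intended recipient can recover.
    public byte[] ComputeMessageHmac(byte[] cipherText, byte[] sessionKey)
    {
        using (var hmac = new HMACSHA256(sessionKey))
        {
            return hmac.ComputeHash(cipherText);
        }
    }

    // Non-repudiation: sign that 32-byte HMAC with the sender's private key.
    public byte[] SignMessageHmac(byte[] messageHmac, RSAParameters senderPrivateKey)
    {
        using (var rsa = new RSACryptoServiceProvider(2048))
        {
            rsa.ImportParameters(senderPrivateKey);

            var formatter = new RSAPKCS1SignatureFormatter(rsa);
            formatter.SetHashAlgorithm("SHA256"); // the HMAC output is SHA256-sized
            return formatter.CreateSignature(messageHmac);
        }
    }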
So if you are interested in using this, I really do encourage you to download the book, it mirrors what we've talked about today. That book does come with a lot of sample code, so all the code snippets I've showed you on the screen today, it all comes in a solution, you can just use the code, steal it and use it in your own solutions. If you've got access to PluralSight, say my practical cryptography in.NET course, covers what we've talked about in a lot more detail, it talks about the why we do a lot of this as opposed to just the how. Again, there's lots of sample code you can download with that course as well. If you don't have access to PluralSight, come see me afterwards and I've got some access cards which I can give you. But cryptography itself is a fascinating subject, when you start looking at the history of how cryptography came about, it's an absolutely fascinating subject. So if you want to read a bit more into it, then there's some books here that I highly recommend looking at. The first one is called The Code Book by Simon Singh. It's a relatively short book, it's about the size of a standard novel. That covers the history from back in the days like Mary, Queen of Scots, and the Romans right away through. So RSA and modern digital cryptographic protocols. It's quite an easy read, it's not mathematical or very complicated, it's written more like a novel, so I highly recommend that book. My personal favourite book is a book called Everyday Cryptography by a guy called Keith Martin. This book is split into two, so the first half goes into a lot of detail about the protocols and primitives that we talked about today. It actually talks about how they work under the covers. Then the second half of the book is how they're actually applied to real life, so how sort of Wi-Fi encryption works, how SSL and TLS actually works. So where we discuss hybrid cryptography, the way TLS works is very similar in how it does the key sharing handshake. It may not necessarily use RSA, but it's a very similar concept. Probably the most famous cryptography book, and the book that the NSA actually tried to ban back in the 90s unsuccessfully, is a book by Bruce Schneier called Applied Cryptography. This book doesn't cover AES, AES came after when this book was written. It's quite a hard book to read, but if you really want to get into the nitty-gritty detail of how a lot of these algorithms work and you're not scared by a bit of maths, then that book's quite good as well. So thank you very much. I'm going to be hanging around for a few minutes, plus I'll be around the conference for the rest of the day. On your way out, I'll be very grateful if you could vote on the session as well. If you press the green button, you are awesome. Thank you.
Not encrypting your data is a risky move, and just relying on hope that you won't get hacked and compromised is not a strategy. As a software developer you have a duty to your employer to secure and protect their data. In this talk, you will learn how to use the .NET Framework to protect your data to satisfy confidentiality, integrity, non-repudiation, and authentication. This talk covers random number generation, hashing, authenticated hashing, and password-based key derivation functions. The talk also covers both symmetric and asymmetric encryption using DES, Triple DES, AES, and RSA. You then learn how to combine these all together to produce a hybrid encryption scheme which includes AES, RSA, HMACs, and Digital Signatures.
10.5446/51849 (DOI)
Okay, I think it's time to start. So welcome to the Force Awakens with F#. I don't even know how to pronounce this correctly. So I'm Evelina Gabasova and I work as a postdoc researcher at Cambridge University in cancer research. So I deal a lot with DNA mutations, things like that, and that's an incredibly complex type of data. And when you look at it, it's really hard to understand what you are doing, and even if you fit some kind of statistical machine learning model, it's really hard to see what's actually happening there. So this is an example of metabolic networks, and you really can't see anything. I can't see anything because I'm not a biologist. So in my free time, I like to play with other data. I suspect they won't be using this title for the actual episode 8. So let's talk about Star Wars. Who likes Star Wars? Yeah, thanks for coming. You are in the right talk right now. So some time ago, I decided, well, let's analyze Star Wars. I actually started doing this right before the premiere of episode 7, so there was a lot of hype around Star Wars. So I was thinking, what kind of data can I actually analyze? And well, there are obviously the movies, but it's kind of hard to analyze movies, right? So the next data set that's available about Star Wars is actually the screenplays, or scripts. And there are multiple different repositories online where you can go and download scripts of almost any movie you are interested in. And they publish them very quickly. So for example, the script for episode 7 was there, I think, in January. I'm not really sure what the legal status of that is, but if you want to look at it, you can. So I looked at the scripts. And this is an example of a script from episode 4, the original Star Wars. And the great thing about this is that they have a very standard type of format. So you can probably see that there is some title of a scene. And they always start with INT or EXT, meaning interior or exterior. Then there is some description of a scene. And there is a name of people speaking and what they are saying. And because we are programmers, we know that this is actually quite easy to parse. Or if it has this type of structure, we can parse it. So I decided, well, let's look at interactions in Star Wars. And maybe I can even extract a social network by looking at who speaks with whom in the scenes. So I went on and downloaded all the script files of all the 7 movies that exist right now. I don't know if I will continue with this in the future, but I have 7 of them now. And I looked at them. And because they are published online, they are usually in HTML. And this is how it looks. And that's also pretty nice, right? There is a pre-formatted HTML block with the scene name in bold. And then there is a description and again, a name in bold and in capitals. That's easy to parse. Well, usually. So when I first started doing this, I ended up with a bunch of regexes, because I wanted to match all the different ways a name can appear in a script. But because I'm working in F#, I would like to show you another way to parse these things. I know if you have been to Scott Wlaschin's talk before me, he was talking about parsers; I'm not doing anything that fancy. But when I looked at the structure of the script, it's actually quite simple. Well, this is just an example of the standard format of the script, how it looked. But I want to really write something that I can read. I don't want to deal with how it looks underneath.
So what I used is something called active patterns in F#. And there you can write something like this. You just parse the script. You split the script into elements. And then you can just match each element here with a scene title or a name. And that's very readable. And you don't even have to know what's happening underneath. But I'll show you what's happening underneath. This is something called an active pattern in F#. So when you want to match something against some regexes, you don't have to put the regexes into the function that's actually doing the matching. You can just define something like this. This is called the banana clips operator. So you can say that a piece of text is either a scene title or a name or something that I'm not interested in. And you can hide all the ugly regexes in here. And by the way, the regex for names is very complex, because in Star Wars, characters are named anything. You don't get people named C-3PO in real life. So this had to be quite complex, because it had to match all sorts of dashes, slashes, everything. And I don't want to care about this. For me, regexes are something that's write-only. I don't want to look at them again. So I can hide them in this kind of thing. So what this does is it takes the string as an input and then matches it against each of the regexes and returns this kind of thing: a scene title and text, or a name and text, or a word. And then when I do pattern matching over it, I don't have to care about what it does underneath. And this is readable. When I went back to my original implementation with all the regexes all over the place, I had no idea what was happening there. So for me, this is the way to write parsers in F#, at least for very simple cases. And if you want to play with F#, definitely look at active patterns, because they allow you to hide the implementation details. And then you don't have to care about them in the more high-level functions. So I thought, now I'm set. It wasn't the case. So I tried to run it on all my seven different screenplays. And yeah, it was a trap. Because some of the files have a completely different structure. For example, the names are not in bold, are not centered. They are at the beginning of a line followed by a colon, et cetera. So I had to write another parser for this type of screenplay, and as usual it was much more work. And that's the general case in anything data related. You spend 90% of your time just cleaning up the data. And now, this also still has some kind of structure, because all the names are in capitals at least. You would think so. For example, this one, by the way, is this scary guy. But if you look at the second letter, it's not an I. It's a lowercase l. I don't know if it came from some OCR system that just decided that I is a lowercase l. And for things like these, I actually had to put in explicit modifications, because I didn't find a way to deal with this systematically. And that's every time you have to deal with data. So I went through this and I thought, okay, now I'm all set. I have all the characters and all the scenes. It's all nice and easy from now on. Well, no. Because some characters don't even speak in the Star Wars screenplays. They are mentioned there but they don't speak at all. For example, R2-D2. If you actually go into the script, they say things like this: R2 frantically beeps something. Well, thanks. Or Chewbacca, who doesn't speak at all.
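(To make that concrete before carrying on: a small self-contained sketch of what such an active pattern can look like. The regexes here are deliberately simplified stand-ins; the real ones in the talk were much hairier.)

    open System.Text.RegularExpressions

    // A multi-case ("banana clip") active pattern: every script element is
    // classified as exactly one of these three cases.
    let (|SceneTitle|Name|Other|) (line: string) =
        if Regex.IsMatch(line, @"^\s*(INT\.|EXT\.)") then SceneTitle (line.Trim())
        elif Regex.IsMatch(line, @"^[A-Z0-9 \-/']+$") then Name (line.Trim())
        else Other

    let classify line =
        match line with
        | SceneTitle title -> printfn "Scene: %s" title
        | Name speaker     -> printfn "Speaker: %s" speaker
        | Other            -> ()   // descriptions, dialogue, stage directions

Anyway, back to Chewbacca.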
So this is Chewbacca barks a comment, and Han replies, boy, you said it, Chewie. No, you didn't. So actually, Peter Mayhew, who plays Chewbacca, I think he did a very good job playing him, because with this type of instruction in the script, what can you do? So I decided, well, when I extracted all the names in all the scenes, I actually have to include these characters, because they can't be left out, right? You can't just have a Star Wars social network or something without R2-D2 or Chewbacca. So I went on and actually extracted all the mentions of all the characters in the screenplays as well, and tried to estimate how many times they would have spoken if they had spoken there. So I came up with a very, very scientific equation. Well, you don't have to care about this, really. It's basically just scaling the number of times a character is mentioned by the rate at which characters actually speak. So on average, actually, every character is mentioned about two times as much as they speak explicitly in the script. So I just counted how many times, let's say, Chewbacca is mentioned, weighted it by how many times Han is mentioned, and tried to compute how many times Chewbacca would have spoken in the script. So everything is good now, right? Well, sorry, I didn't include some of the characters, because there are actually so many characters that don't say a word that the Ewoks didn't make it, for example. So the only ones that I included manually in this way were R2-D2, Chewbacca, and BB-8 from the Force Awakens. And then when I looked at the characters that my algorithm extracted from all the screenplays, I found out that, well, actually, some of the other characters appear there but don't say a word in some of the episodes. So they messed up my analysis again. And if you have seen episode 7, you probably know who I'm talking about. So now I decided, well, I should somehow check if all the characters actually appear in the screenplays, because there was such a mess, and whether they are actual characters, not just some artifact of my algorithm. And I propose a theorem that there is an API for everything. And there is an API for Star Wars. You can go to swapi.co, the Star Wars API, and you can find an API for Star Wars. So I looked at the website, and it's quite nice, quite well documented. You have all sorts of example requests and what they return. And you can get information on characters, on starships, on planets, on vehicles, I think, as well, everything. And they have a lot of wrapper libraries. So when I looked at them, there was C sharp, Python, R, Java, Go, Ruby. There was no F sharp, but well, maybe I could just use one of the other ones. But then no, no, no. Well, I'm doing this in F sharp. Let's do it properly. And then I looked at, for example, this is the C sharp one. And you can see that the code is not very difficult. Well, these are all just get and set methods and things like that. So there is not actually much going on. And this is just an example of one of the wrappers, around, I think, people. And you can see it's 150 lines, and most of them don't actually do anything. So I wanted to show you how we can do something like this in F sharp. And if you have seen any of the F sharp talks, some of them mentioned type providers. So this talk is partly me saying I love type providers. If you are doing anything with data, type providers are amazing.
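(Backing up for a moment: the "very scientific equation" isn't reproduced in the transcript, but from the description it boils down to a proportional scaling, roughly the following, with s for spoken lines, m for mentions, and Han as the reference character:)

    \hat{s}_{\text{Chewbacca}} = m_{\text{Chewbacca}} \times \frac{s_{\text{Han}}}{m_{\text{Han}}}

Now, back to those type providers.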
So I wanted to show you how I can use type providers to write something like all the wrappers that were there in all the different languages in a very small amount of code. So I'll just load the F sharp data library, which contains some of the type providers. And now this is actually just an example request to the Star Wars API. And what I get from this is some document that basically just answers the request. And what I will do is I will just, I don't have to read the documentation. I don't have to do anything. I can just take this and create a type provider. And it will be a type provider for a Star Wars person. And it will be a JSON provider. And I will give it the URL. Create this and I have basically everything I need right now. So now I can write a function to get me information about any person in Star Wars. So get person. And I will take an ID as a parameter. And now I will just call person load. And I will give it the request that I want. And I will give it the ID. I need normal brackets here. And that's it basically. Now I can have a look at the first person. I need to change this into string. And now I get the information on the first person. And again, it returns a JSON, which is not very readable. So I can just type p. and get all the information here in my intelligence, including the types of all the elements. So for example, I can look at who the first person actually is. So I will look at name. I know that it's a string. And it's Luke Skywalker. Do you know what's the, let's say, his mass? How much he weights? Did you ever want to know something like this? Well, he weights 77 kilos. A lot of very important information. But what I wanted to show you particularly is that basically in these three lines, I have everything that was there in the full big wrapper. And I don't have to care about it at all. I don't have to read any documentation, write any methods. This is all I need. So I actually went on and wrote a full wrapper around the API. And you can see it's not very imaginative code either. It's just defining the type providers and defining some of the functions. And that's it. That's everything. And on the same day that I discovered the Star Wars API, I actually just went back and sent a pull request and now there is an F sharp wrapper. Thank you. And in case you were wondering other random things about Star Wars, I can show you some more information. For example, do you know what's the most common eye color in Star Wars characters? You always wanted to know this, right? So the first most common eye color is brown. So for example, Yoda has brown eyes. Who knew? The second most color is blue. And the third most common color is yellow. So Darth Vader has yellow eyes, all the important information. And you can also see, for example, in episode four, the original Star Wars, Luke Skywalker once walks into a prison cell and Princess Leia says, aren't you a bit short for a storm trooper? So we can see, so is actually Luke Skywalker a bit shorter than average Star Wars character or not? Wow. It's very easy. I just get his height, which is 172 centimeters. And I also downloaded information on all the other characters. And it's 174 centimeters on average. So yes, Luke Skywalker is shorter than average. So she was right. So all the important information that you always wondered about, right? So well, let's move on. So if you want to look at it, it's on my GitHub and you can get to it from the Swarovie website as well. And play with it. It's a lot of fun information. 
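(The whole wrapper, in essence, looks something like this. Treat it as a sketch rather than the published wrapper: the sample URL teaches the provider the shape of the data, and the property names and their types below are whatever the provider infers from the live JSON response.)

    open FSharp.Data

    // The sample response gives the type provider the shape of a "person".
    type Person = JsonProvider<"http://swapi.co/api/people/1/">

    let getPerson id =
        Person.Load(sprintf "http://swapi.co/api/people/%d/" id)

    let luke = getPerson 1
    printfn "%s, height: %A" luke.Name luke.Height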
So now what I had right now, when I ran all the code that I was describing, I had characters, I had the scenes, and I knew that all the characters are actually appearing in the films. Then Star Wars was bought by Disney and then I came across this analysis and they actually analyzed 2,000 screenplays and they were looking at how much women characters speak and how much male characters speak. And they were just comparing scenes and how much screen times they get. And they specifically looked at Disney. And it was sort of depressing because in almost all the Disney movies, men speak much more than women, etc., even if it's about princesses and things like that. And because I had almost the exact same data, I thought, well, maybe I can replicate this using my Star Wars dataset. And again, type providers. Because what they did in this analysis is that they went onto IMDB, extracted a list of actors playing in the individual films, and then they looked at them if they are male or female. And I can do exactly the same thing with my type providers. Because there is not only a type provider for Jason, there is a type provider for HTML as well. So IMDB, HTML type provider now. So let's just open everything again. And as you can see, I'm also giving it just the URL of some page describing a film. And this is actually from episode 7, so I will load it now. And let's... I don't even have to look at the website. Now I can do episode 7. And I know that there are some lists and some tables. So I will look at the tables. And the tables are these. I don't have to go through the website at all. I see everything here in F-sharp. And this is all the code I really need to access it. So I can look at the cast. And they have the cast in credits order verified as complete. And here I have a nice printing function to actually look at it properly. So these are all the people that play something in episode 7. And for example, did you know that Daniel Craig plays a stormtrooper? He's somewhere here. Yeah, Daniel Craig plays a stormtrooper and he's uncredited. So this gives me quite a lot of information. And again, it's in my Rappel. I don't actually have to look at the website and go through it. So with a bit more code, I put together this graph. So this is comparing episode 1 to 7 based on what's the percentage of scenes where, what's the percentage of dialogues with men and what's the percentage of dialogues with women. And actually, sorry, I put women and robots together because they are like other genders. Because otherwise, it would be even worse. So episode 7 indeed has more women speaking, but still almost 70% of the dialogue is men. Still better. The worst one is actually episode 4. But enough of this. I had right now, as I said, all the characters and their relationships and where they speak. So I decided to put together a social network by putting together characters that speak in the same scene. If they do, they are connected by a link. And I can visualize this. And because, well, what do you do if you want to visualize something nice and put it on the web? You use JavaScript. So I went for D3JS, which is an amazingly powerful library in JavaScript for visualization. You can write anything there. The downside is that you can write anything there. Because it's so powerful that you can't really learn it. You have to go online and look at various examples of what people did with it before so that you can copy it and put your own data into it. Anyway, I don't really want to use JavaScript. 
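(In the same spirit, the IMDB part boils down to something like the following. The table name below is a guess; in practice you let intellisense show you the names the provider actually inferred from the page.)

    open FSharp.Data

    type Episode7 = HtmlProvider<"http://www.imdb.com/title/tt2488496/fullcredits">

    // Tables are inferred from the page's HTML; pick the cast table and read its rows.
    let cast = Episode7.GetSample().Tables.``Cast``.Rows
    for row in cast |> Seq.truncate 5 do
        printfn "%A" row

Anyway, back to the JavaScript problem.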
And as I said, I want to do everything in F# here. So there is this new project called Fable. It's using Babel under the covers. This is the current logo. And it allows you to write code in F# and translate it into JavaScript. I think it's called transpiling, is that the correct term? So it's a transpiler. And it's actually really neat. It's a very new project, I think it started about six months ago. And here on the right-hand side, you can see the code in JavaScript from an example on D3.js that I just downloaded from the web. And most of the code on the left-hand side is the code in F# that's actually calling the D3.js library. And you can see that it's very similar, actually. Right? Well, instead of var, you have let. But otherwise, all the function calls are very similar. Sometimes you have to deal a bit with types, but it's not very painful. And it allows you to basically just translate your JavaScript code into F#. And you can call any functions from F# and have all the type safety and everything. And I just wanted to show you also the translated code. So this is how it translates the code into JavaScript. And you can see, for example, this is calling the force layout for networks from D3. And it's actually very readable. There are no thousands of underscores everywhere. And you can go there and see what's happening. And that makes any debugging so much easier. So I actually went on and did all the things in Fable. And if you want to play with it, there are many examples online of building games in F# and translating them into JavaScript. There are also some examples with Node.js. And it's really nice, really. So let's go to the actual social networks that I promised. So this is the social network of all the Star Wars movies put together. And because I promised JavaScript, here it is. So this is an interactive visualization of the Star Wars movies. Let's make it a bit bigger. So you can see that the big black node is actually Darth Vader. And I guess you can already see some patterns emerging. Whenever you are working with data, try to visualize them, because that gives you so much information. So I guess right now you can probably guess that on the left-hand side are the new prequel episodes. And you can see that the social network is a mess. You can see that there are so many nodes and it's very dense. Then in the center are the original episodes. And on the right-hand side is episode 7. We can even look at the episodes individually. So this is the first episode of the prequels. And you can see it's quite densely connected. It has several main characters. I tried to color them. So this is, for example, Qui-Gon. This is Anakin. This is Jar Jar. And when we compare it with episode 4, the original Star Wars, you can probably immediately see some difference. The network is much sparser. It has only a few major characters, and they are connected to each other. And if you compare it with the Force Awakens, the social network is bigger. But there are not as many characters as in the prequels. So that already tells us something about the structure of the story and how understandable it is, probably. And if we go even further, we can quantitatively compare the different episodes. So the first thing that you can think of is how large the networks are. So this is just a graph showing the number of characters. And these are only the characters that speak in at least two scenes. And only the characters that are explicitly named.
So if someone is just a Stormtrooper, I didn't include them there. And yeah, episode 7 has the most characters. And then the original episodes have fewer, about 20 main characters. And then in episode 7, it jumps up again up to 27. So let's hope they don't continue with this trend. Otherwise, they will get to the episode 1 territory, and you probably don't want to go there. And then there are various scientific methods to compare networks. And the first one is called density, which sounds fancy like a density of a network. But what it really does, it just tries to compare the number of connections that are in the network with the number of connections that could be potentially there. So if you want an equation, this is it, just divide the number of existing connections in the network by the number of total connections that could be there. And when we look at this, actually, the episode 1 and 2 have the lowest density. Because they have a lot of characters that are only vaguely connected, and they have all these debates in the galactic senate that are not very interesting. And actually, interestingly, episode 6 has quite a high density, and that's maybe because I didn't include the Ewoks in there. And episode 7 actually has about the same density as the original episodes, and so does episode 3. And actually, if you look at IMDB, you will see that episode 3 has the highest rating out of the prequels. So maybe density has something to do with the quality of the story as well. And you can look at other measures. One of them is the clustering coefficient, which tells us how locally connected the network is and how much the characters actually speak to each other. So if you look at this green guy, he has three neighbors. And you can see if the three neighbors are all connected to each other as well or not. So for this one, there are two of them connected. For example, for this one, all his friends are connected. So that tells you how connected your networks are. And in terms of the story, this basically means if the story is following one character that just interacts with other people and the other people don't talk to each other, then that will have a very small clustering coefficient. And if the story follows a group of people that talk together and interact with each other a lot, then it will have a large clustering coefficient. And this is the equation. We don't actually have to care about it very much. I don't want to go into details. But if I plot it for all the different episodes, again, episode one has the lowest clustering coefficient because there are all these weird characters that are not really talk very much. And what's nice is that episode seven has about the same clustering coefficient as the original episodes. And actually, episode three here has quite a small clustering coefficient as well, which is interesting. It tells us something about the story. And I'm not claiming that small clustering coefficient means a bad story. I think it makes sense in something. For example, if the main hero is going through some obstacles and meeting other characters that are helping him on the way but don't really interact with each other, that can be a good story. But I think in Star Wars, it actually tells us something about maybe how the network is structured. And actually, I think in this case, it roughly correlates with the quality as well. And then we can look at local characteristics in the network. So the first one, the most basic one is degree. 
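(Neither equation made it into the transcript, but both are standard; for an undirected network with n characters and m links, and with k_i neighbours of character i, e_i of which are linked to each other:)

    D = \frac{2m}{n(n-1)} \qquad C_i = \frac{2 e_i}{k_i (k_i - 1)}

Degree, which comes next, needs no formula at all.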
And that's just looking at how many connections a character has in the network. So for example, this guy is very important, because he has six connections in this network, a small one. But this guy is less important, because he has just three connections. And that tells us something about the centrality of the characters in the social network. So this just represents how many characters each one of them speaks to. And yeah, it's basically just the number of links that are outgoing from or incoming into a node. And I want to compare it with another measure of centrality in a network, which is called betweenness. Because there are many measures of seeing who's the most central in a network. And the degree just tells us if someone talks to a lot of other characters. But betweenness tells us how important a node is for communication in the network. So I will explain again. So if we look at this guy and these two other guys, the only way they can communicate with each other is through the green guy, because they don't know each other directly. And these guys, on the other hand, if they want to talk to each other, only one half of the communication would go through the green guy, because they know each other through some other person as well. So I can, for example, ask: if Princess Leia wanted to talk to Jar Jar Binks, who would she have to go through to pass a message to him? So that tells me how important a character is within the story. Because some characters may be important just in one part of the network, but if a character speaks to a lot of different characters across the whole episode, then that means he's probably more important to the story. So again, this is the equation. And now it's a bit more complicated, because for each pair of nodes, we look at the number of shortest paths between them and then look at how many of those shortest paths go through the specific node. And we sum it over all different pairs of nodes in the network. And it's already quite hard to compute, right? Because you have to compute all the shortest paths between all the nodes in the network and then look at how many of them pass through the node that you are interested in. And if you are doing anything with data, there is always a package for it in R. So there is a library in R called igraph. And if you are interested in betweenness, there is a function called betweenness. And if you are, as I was, working in F#, you can do something like this. Just open RProvider and RProvider.igraph, and call the betweenness function from R. And again, you get intellisense and everything. And this makes anything like this very easy, because you do all the heavy preprocessing in a language that you are more comfortable with, or that's a bit safer than R, and call the algorithms that are already implemented there. And I actually do this quite a lot, because R is a nice programming language, but just for data science. If you want to do any general programming, then it can get really, really painful. And in F#, you get all the safety of type inference. So it really helped me when I was writing my parser, for example. And I can do all the heavy data science in R and pass data to it directly from F#. So let's look at the centrality. So who is the most central in episode 7, let's say? So actually the person who has the most connections is Poe. He's one of the resistance pilots. And that's because he talks to a lot of people across the whole network. And he talks to all the other resistance pilots as well.
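(For reference, the betweenness equation in its usual form, where sigma_st is the number of shortest paths between s and t, and sigma_st(v) is the number of those that pass through v:)

    g(v) = \sum_{s \neq v \neq t} \frac{\sigma_{st}(v)}{\sigma_{st}}

(And the F# call described above is roughly the following; starWarsGraph is assumed to be an igraph object built elsewhere from the edge list, and the igraph package has to be installed in R for the RProvider namespace to exist.)

    open RProvider
    open RProvider.igraph

    // All the shortest-path work happens inside R's igraph package.
    let centralities = R.betweenness(starWarsGraph)

Back to Poe and the other resistance pilots.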
And there are quite a lot of them. And the most second central is Finn because he also talks to a lot of other people. And then Han, Chewbacca, and VV8. And for VV8, this is just an estimated number because I didn't find him in the screen place as I was explaining. So does this tell you something about who's actually important in the story? Sort of. But you might be missing some of the main characters in this. So this is actually a bit weanness. And now you can see that suddenly Kylo Ren and Ray jumped up a lot. And that's because they are more important to the actual, they are more important by connecting different communities within the network. So for example, Kylo Ren is very important because he is one of the few people that talked to Han and Chewbacca and Ray, but he also talked to Snoke, which puts him into the center of the network a bit more. But also Poe talked to him, which already gave him quite a boost in betweenness as well. So this tells you a bit more about who's important in the actual network. And if now you might be asking, so who's the most important across the whole Star Wars, right? So before the new episode, this is just Poe celebrating that he's the most central. So who is the most central overall across the whole Star Wars universe? Before episode seven, it was this guy because he was talking to quite a lot of people in the prequels as well as in the original trilogy, a little, although he appeared physically in only the first one of them. And also what I did in this analysis when I got this guy, it was because I just looked at the names in the screenplays. And actually they are making a very big distinction between Anakin and Darth Vader because they never appear in the same scene, so they appear like two completely different characters. And when I added the new episode and also merged Anakin and Darth Vader, still Darth Vader is ruling the galaxy, he is the most important one. Everyone else is just lying there dead. So also I made a quick sample with Neo4j. So if you want to learn something about Neo4j, I have a link at the end. You can go there and play with it because it's a very nice data set and you can just extract connections between people. And what's important here is that you actually understand what's going on because all the people have names that you recognize. And that makes any learning so much easier than if you have just some anonymous customers or something like that. You can understand what's happening. So this is for example just looking at characters that play in the same movie and how they are connected to each other and computing the degree. Yeah, degree just within a single movie. And you can see that Neo4j is actually quite readable because here I'm just looking at names of characters that appear in... Yeah, I'm looking at episode four, New Hope, and I'm looking at characters that talk to other characters that appear also in New Hope and looking at the number of scenes they spoke together with in. So this is how you would do it in Neo4j and you can play with it. And it's not just completely a toy example. I did something similar by analyzing Twitter and I actually analyzed the social network around F-Sharp, the F-Sharp Software Foundation, which is the home of open source F-Sharp. And it has this Twitter handle, F-SharpOrg. And I looked at all the other Twitter handles that are connected to it. And then I was looking for the most central people there. And actually this is the order based on degree. 
You can see that the first one is Don Sime, who is actually the author of F-Sharp. So if you are an alien and come to Twitter and you say, oh, so who is the most important in C-Sharp, you can do something similar. I don't know who you get. But it tells you something about the actual network and some meaning about it. And the second one, it's, I think it's the official Microsoft side of F-Sharp. The fifth one is the official side of the community of F-Sharp. And the fourth one is Thomas, who's speaking tomorrow, if I'm correct, in the morning. So if you want to see some important people in F-Sharp, Twitter says you should go see him. And these are information from November 2014. So it might have changed. But the problem with analyzing data from Twitter is that it takes an awful long time to download because there is all the rate limiting. So I didn't replicate it afterwards. But I might do that because, well, it tells you something about how people communicate with each other and what's happening. So if there are any changes, I can see it probably. And also another example, well, this is quite a famous example in network science. And it was a company in Hungary where they had two factories and some headquarters. And they were having problems because there are always these rumors spreading in the factories and they had no idea why and what's happening because the headquarters, they were just issuing orders and issuing messages. But there are always rumors and they were not true. So they actually called, well, in these olden days, they called social scientists who talked to people and said, so who do you talk to if you want to get more information about anything? And they named a few people. And this is the social network that they constructed from it. And what's interesting is that, well, you can probably see that I think the pink one is the headquarters. The other one are the factories. And the most important note is here. It's not in the headquarters. And that was because there was this guy who was doing health and safety. And he was actually traveling around the factories and talking to everyone. So he was the most important person in the company for communication. So now they knew that if they want to spread some kind of information, they have to talk to him because he will spread the correct information to the others. And what you would do now, you wouldn't get social scientists to talk to people. You would just explore the emails that are sent within your company or Slack messages, like who replies to whom, things like that. And because you have access to all this information, you can actually do that. And then you can look at these simple measures, like betweenness and degree centrality, things like that. And you immediately get some information that might be actually useful for you. So as I mentioned, you can do social network analysis. And it's actually a lot of fun. And you can look at how you communicate on Slack, who sends emails to whom. And you can also analyze supply grades. If there is a blackout in some part of your network, what parts will get affected as well. So network science is actually very important. And at the end here, I have biological networks. And that's actually the area where I work because I look at how genes interact with each other, what protein binds to what region of DNA. And then I can see what's happening, how they are interacting with each other. 
And this is really not just a toy example, because I was reading a paper the other day and they were actually looking at the betweenness. And I was like, ah, I know what betweenness is. I did it in Star Wars. And here they were claiming that genes that are important in cancer, that are tumor suppressors or oncogenes, that they have higher betweenness in the biological networks. And because I was analyzing the Star Wars networks before, I knew almost exactly what it means in terms of the actual network, because I got a feeling for it by looking at very fun data set. So I want to encourage you to actually play with data, because you get more of it than just some fun fights about Star Wars. And right now I went through quite a lot of things. I went through script parsing in a functional way. You saw the active patterns. I was calling R and JavaScript from F-sharp. I think that's actually the future of data science. Just call everything from everything, because there are different tools in different languages. I was showing type providers. I showed you the HTML type provider and JSON type provider. And there are many more. There is one for SQL, there is one for... I can't even think of all of them. I use the CSV one, because CSV is the format for data science. And I would really like to encourage you. If you are interested in data science, if you have seen some of the talks on R, for example, then you might be thinking, well, maybe I should go into data science or start with fun data sets. Because by analyzing them, you know what's happening there. And you get insights when you actually see some of these algorithms applied in real world. And this is, I think, a great way to learn data science. If you want to know more about F-sharp, I put some links there. I have the slides online already. And these are some of the Star Wars resources that I put together. So, yeah, you can read all the scripts, even if you are not interested in any quantitative analysis. You can play with the Star Wars API. You get all sorts of very important information. And all the information that I was showing you is actually on my GitHub. And you can even play with the social networks. And I saw some people doing actual social network data science, playing with it and trying to, for example, overlay the different episodes against each other to see who corresponds to whom in the social network in episode seven, et cetera. And I have some blog posts about this. And as I mentioned, I have a Neo4j demo. And, yeah, play with data. And this is actually the website where I put the slides. So go to eval.inghg.com slash Star Wars talk. And they are all there. And thank you. Are there any questions? So, any questions? I can't actually see you properly. I can't see anything. Well, if there are no questions, then it's time for lunch, I guess. Thanks, guys. you
Let's dive together into the the world of Star Wars! We'll use the force of F# and R to process publicly available datasets relating to the Star Wars movies to find out who's the most important character in the stories and why were the prequels so unsuccessful. On the way, you'll see why F# is a great language for data science - from preprocessing the data to visualizing them - and you'll also learn how you can use similar data processing pipelines to get interesting insights from your own data.
10.5446/51837 (DOI)
Hey, welcome! My name is René Schulte. I'm director of immersive experiences at IdentityMind, and I'm also Microsoft MVP for Windows Development. And I have a background in computer graphics and virtual reality and augmented reality and those kind of things since many years actually. I have a few open source projects, one of them is an AR toolkit port. And it was like a few weeks ago when I had the 5 year anniversary when I was porting this AR toolkit port over to Windows Phone Mango. Does anyone remember Windows Phone Mango? Yeah, you're my man. That was a good time, right? So they finally added like camera access API so I could do some computer vision with that stuff. But that's history. Now we're talking about HoloLens. And today I want to speak about HoloLens development. We have been doing since actually last year. So first I want to set some terminology straight. What is VR? What is AR? What is MR? Because it's often mixed up. Then I will tell you a bit about the HoloLens device, I have the current development kit here with me. And then I will tell you how you can develop for it. We will do a nice demo using Unity from scratch and build a nice little game or app if you will. Then at the end I want to spend some time to... Okay? Yeah, and at the end I want to spend quite some... Cool, awesome, yeah, give a good hand for the technician. So yeah, let's get started. At the end I want to talk a bit about the experience and about the things we learned while building HoloLens apps for the last couple of months or since last year actually. If you're what's about the company I work for, IdentityMind is headquartered in Seattle in the United States. But I actually work out from Dresden in Germany. IdentityMind has always been at the bleeding edge of technology. So I've been working with those big multi-touch tables, those pixel-sensed tables. We're doing Xbox One, Xbox 360 apps, Connect4Windows, we have actually Connect4Windows solutions deployed in the real world. So we're just for try-out but actually real use cases. And we were also brought by an earlier program from Microsoft for HoloLens, so we're developing for HoloLens since last year. Here's a quick video I want to show you about the company. It's just one minute. It shows some of our HoloLens stuff we're building. Let's watch it. Okay, enough of the marketing. Let's talk about some content. So what is virtual reality? Virtual reality is a fully immersive multimedia solution, which means you're fully inside a computer-generated world and you don't see the outside, the real world anymore. And virtual reality is around since many years, actually, since, think about flight simulators, so many decades, actually. But now we have consumer devices that are available for the masses. And we have on the low end, like Google CallPort, and in the high-end spectrum, we have devices like the Oculus Rift and the Huawei Dive, which are not too expensive compared to what we have like five or ten years ago. And they provide a very good quality. And with quality, I especially mean latency, because latency is a big challenge with those VR headsets. Because what you want to do is you want to render the scene very realistically, and on the other hand, you also need to provide it as a fast feedback. So when the user rotates the head, you need to quickly render a new updated frame and send it back to the device to show it to the user. So this is the latency, and we are talking about three milliseconds. 
So actually, anything that's taking longer than three milliseconds can make the user sick. They can throw up, they can get headaches, and other things you probably want to avoid for users. Also important is we humans don't just have eyes, right? We also have ears, we have hands, and so on. So spatial sound is something that those devices already implement, but also those data gloves, which have those little motors inside where you can give haptic feedback. And there are even experiments with virtual smelling, which could be fun, but could be also very awkward. I'm thinking about the VR thought app there. This will happen, I'm sure. So what is AR? What is augmented reality, and what is mixed reality on the other hand? It's not fully immersive. This is clear. So you still see the real world. You don't just see the virtual world, which is augmented with virtual objects. So this is the main difference compared to VR here. And the Microsoft HoloLens I have here is called a mixed reality device, although it's actually also an augmented reality device. It seems like everyone has its own definition of what mixed reality means, right? But I just want to tell you why Microsoft calls it mixed reality in order to diversify from the existing augmented reality solution we have these days. So you probably all have a smartphone with some kind of AR app on it, with an augmented reality app. What they do usually, they take the camera stream, they analyze the camera stream, run some computer vision on that, and then augment that with virtual objects. And you see the real world, not with your own eyes, but through another screen. You see that also monoscopic, because the camera is monoscopic. It's a cyclops basically, just one eye. So you don't see a stereoscopic, and you see it through another screen. But with the HoloLens, you see it with your own eyes. You see it mixed basically, because you have those nice see-through lenses here, where you can see the real world with your own eyes, and then the virtual objects are faded in there. So Microsoft wants to diversify, kind of, so they call it mixed reality. Challenging, it's even more challenging to have low latency, because you also need to analyze the real world. You don't just generate virtual objects, but you also need to run the computer vision basically. So even more challenging to keep a good frame rate there. And of course, you want to seamlessly merge the real world with your virtual world. And this is quite challenging if you want to have realistic rendering. Cool, so let's talk about the HoloLens. And I have the current development edition here with me. The HoloLens is a mixed reality head-mounted device from Microsoft, and it has a bunch of sensors integrated. So it has an IMU unit, an inertial measurement unit, which is basically used for the head rotation, right? And then it has a bunch of cameras like those here. Those are environmental cameras, which basically scan the room. They are used to provide a special mapping of the room. So this is nice, we will use that in a demo later, where we use the special mapping of the room, so to balance some physical objects off. It has an RGB camera, it has a depth camera, what else? A microphone array, so it has a bunch of microphones here, which are used for speech recognition. It does a very good job for speech recognition. Yeah, and everything is self-contained. The device contains everything inside here. It's actually a computer, it's not just a display, it's a computer. 
There's a CPU in here, a GPU, and a new co-processor, Microsoft calls the HPU, the Holographic Processing Unit, which is basically responsible for doing the special mapping, speech recognition, gesture recognition, and so on. Yeah, and another cool thing is that actually multiple people can wear HoloLens, and they can see the same holograms. So if you would all have a HoloLens, you all get one from me. I'm just kidding. So if all would wear HoloLens, we could basically see each other, right? This is also a nice diversification factor compared to VR, because with VR, just on your own world. With ARO, you still see the real world, so multiple people can wear HoloLens, they can see each other, they can still see the room, and they could also see the same holograms. So this kind of multi-lens collaboration feature is a very nice, unique point of the HoloLens. And it's a Windows 10 device, so you can even pin your Microsoft Edge browser tabs on your walls and stuff like that. Cool. So what is the input and output paradigm of the HoloLens? The HoloLens uses the so-called GGV input paradigm, which stands for Gaze Gesture Voice. So when I wear a device like this and I rotate my head, this is where I'm looking at, right? So this is where I'm gazing. I'm basically setting a gaze vector, a ray. And this ray is giving me the interaction focus. So this is where I know, okay, this is where the user is looking, this is where he's interested in. So think about it in the desktop world as a mouse move, basically. And the next thing is the R-Tab. So I can have a couple of different gestures, and one of them is the R-Tab gesture. So I hold my index finger up like this, and then I tap like that. So this is in the desktop world, the mouse click, basically. So you're gazing, setting the interaction focus, then R-Tapping to trigger the action, like a mouse click, if you will. And since this is quite limited in terms of input, voice and speech recognition is a very important natural user interface input mechanism for HoloLens as well. And it does a very good job. It's actually better than Xbox One's speech recognition. It's using Cortana inside its Windows 10, like I said. For output, you have those two see-through lenses here. Each of them has one megapixel resolution, and it's actually not just a simple flat screen or flat lens, it's actually multiple layers. The camera at the stream is split up into the channels, like red, green, and blue, and each of them is feed into each layer, basically. And spatial sound, and we will talk about spatial sound in a second later. Those tiny speakers provide a very good, nice spatial sound, which is an important addition to the visual aspect. So how can you develop HoloLens? First of all, you can just use Dijk3D directly, Dijk3D11 with C++, or even ShopDX, which is a wrapper around Dijk3D and C++. Or you can use a middleware like Unity, and Unity is not just a 2D or 3D game engine where it comes from, it's actually also used to build applications. And you have a ton of amazing components already available, like a very good global illumination system, where you can have very realistic shading, and also physics and so on. You have this nice, high-efficient workflow. You can be very productive with Unity and develop quickly using Unity. And of course, it's cross-platform. They have right now 28 platforms. They support us talk it output, which is pretty impressive. They have built-in VR support, since a few versions. 
So the Oculus SDK is integrated already, so you don't need to install a separate plug-in into Unity. You can just use it right out of the box. And Unity is a first-class citizen for HoloLens development, that's for sure, because Microsoft puts out all the tutorials, and the Holographic Academy is all built using Unity, basically. And actually, a few of the applications you find in the Windows Store for HoloLens are also built by Microsoft, and they are also using Unity for a few of them. So you can build some real cool stuff. And yeah, it's still free for personal use. I was at the Unite conference last week in Amsterdam, and they announced a different pricing model for the professional and the plus version, but they're still keeping the personal version for free, so you can use the free version for non-commercial use cases, and you can download the SDK for HoloLens, the emulator, and Unity, all of that for free, and get started. It's pretty nice.

Cool, enough talking. Let's build something, right? What we will do: I will start with a fresh new Unity scene, and I will configure it so it can be deployed to the HoloLens, and we will implement the things we just talked about, like gazing, gestures, spatial mapping, and speech input. And we will do this with a nice little setup of physics objects. We will have a nice plane, we stack some cubes on top of that, and then we can shoot a sphere from the camera, which is the user's head with the HoloLens, basically. We can shoot the spheres into the scene, and they will bounce off physically correct, basically. Let's switch to Unity.

So this is Unity. You probably have seen it. I don't want to go too much into detail about Unity here; Brian had a nice session yesterday about it. But this is basically your scene view. You have the hierarchy of the scene here, this is the game view, and this is the assets view, where you see all your files. So the first thing we need to configure for HoloLens is the main camera, because in a fresh new Unity scene, the camera has a preset position of 0, 1, minus 10, which can cause an offset if we use it for the HoloLens. Because, like I said, when the user walks around with the device, the position of the device is mapped to the main camera in Unity. So we don't want to have an offset here, so we set it to 0, 0, 0. And the head rotation is also mapped to the camera rotation. So we set this. The next thing we need to set is a solid black color as the clear flag, because every frame we want to clear the frame, and the HoloLens uses additive blending, which means the virtual objects are blended onto the screens using additive blending, which means black is transparent. You cannot see black with the device. So we clear to black, and we set the near clipping plane to 50 centimeters, because we want to avoid the user's eyes crossing, basically. If virtual objects are rendered too close to the camera, the user's eyes will start to cross, which is also uncomfortable. So we want to clip off the rendering at 50 or 80 centimeters. And we also don't need 1,000 meters for the far clipping plane; we need 10 meters. And you notice it's all in metric space, right? This is also nice with the HoloLens: all the spatial mapping is already in metric space, at real-world scale.
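To make those camera settings concrete, here is a minimal C# sketch that applies the same configuration in code instead of through the Inspector. The talk does this by hand in the editor, so treat the class name and exact clip distances as illustration values; the Camera API calls themselves are standard Unity.

    using UnityEngine;

    // Applies the HoloLens camera settings described above at startup.
    public class HoloCameraSetup : MonoBehaviour
    {
        void Awake()
        {
            Camera cam = Camera.main;
            cam.transform.position = Vector3.zero;        // device position maps to the camera, so no offset
            cam.transform.rotation = Quaternion.identity; // head rotation is mapped onto this as well
            cam.clearFlags = CameraClearFlags.SolidColor; // clear every frame to a solid color...
            cam.backgroundColor = Color.black;            // ...and black renders as transparent on the additive display
            cam.nearClipPlane = 0.5f;                     // don't render closer than ~50 cm (avoids eye strain)
            cam.farClipPlane = 10f;                       // 10 meters is enough; everything is metric, real-world scale
        }
    }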
Cool. So we have our scene set up here, and now let's also enable the player settings for HoloLens. You can find them here, and we need to switch to this tab, which is Windows Store Applications, or Universal Windows Platform, and we need to enable just this checkbox, so the Windows Holographic SDK is included in our target output. Another thing we need to set is the publishing settings, and if you've done UWP apps before, you're probably familiar with those capabilities. We need to enable some checkboxes here: Internet Client, because we want to use the emulator for a bit, which connects with the hosting operating system; also Microphone, since we want to use speech recognition; and Spatial Perception, since we want to use spatial mapping.

Okay, so we have everything set up. Let's actually add some objects. What I usually do is create an empty game object as a container, where I can place objects that belong in the same physical location together and group them, and I will tell you at the end why I do this and why it makes sense. So reset the position and set it a bit further away from the camera. We set it like 50 centimeters lower and 2 meters in front of the camera, which is usually a nice way to do it: when you start the application, the holograms start 2 meters in front of the user. Cool, so let's add the plane I mentioned. This is our plane; a bit too large, so let's reduce the size here, like this. We want to have a different color, so we create a material, and since we're good citizens, we create a folder for that. So let's add a folder for materials, and let's go with... ("Something's not right. Try again in a little bit.") Okay, wait a bit, Cortana, I'm good. So let's use a green field, a green plane: drag and drop. That's so nice about Unity, it's so easy to build that stuff. Yeah, that was funny.

A cube: so we will add a cube onto our plane, like I mentioned. Also reduce the size of this one a bit, like that. And pull it up a bit, yeah, like this. This is good. And add a Rigidbody component. The Rigidbody physics component from Unity basically tells the Unity runtime that it should run rigid-body physics calculations here. Then we just duplicate this one, place the other one here, and the other one on top, like this. And then we can hit the play button and see if all the physics stuff works. They should fall down onto the plane once it's compiled. Yep, that works. Okay, nice.

So we have our basic scene set up; let's add some actual HoloLens functionality. We will add another folder for that, for our scripts, basically, because we will write some C# scripts for accessing the HoloLens APIs. Let's call the script CannonBehavior, because like I said, we want to shoot spheres from the camera into the scene, like a cannon, like a cannonball. And we can attach that script to our camera; I just drag and drop it onto the camera here, and you see it's attached here. And then I double click, so it opens in Visual Studio 2015. I really like that with Unity 5, they finally added full Visual Studio support, so you can use Visual Studio for editing your scripts. And you can actually also debug your scripts, which is very nice. You just set a breakpoint in Visual Studio, attach to the Unity process, and you can debug your scripts. Very convenient if you're a Windows developer. So that's nice. And you can use C# scripting, like I said, and actually also JavaScript. We're using C# here; that's my preferred language for this kind of stuff. Okay, cool. So this is the basic Unity script that is generated for us.
We have the Start method for initialization, which is called once when the script starts the first time, and the Update method, which is called every frame, basically. So let's remove that stuff, because I have some code snippets already prepared, which we will plug in, and then we talk about those.

So let's add the gesture recognizer for the air tapping. I have this class here, where I instantiate a GestureRecognizer, and like the name implies, this is a HoloLens-specific API which is responsible for, well, recognizing gestures. It has a few events; this tapped event is of course fired when the user does the tap gesture. And then we can also define what kind of gestures we're interested in. In this case, we just want to listen to the air tap, to a single tap. That's fine for our use case here, but you could also set a few more gestures, of course. And then we call start capturing gestures, which is nice, because we can also start and stop this kind of gesture recognition as we want.

Okay, let's add the tapped event handler here. This is the event handler which is triggered once the user taps; we call the Shoot method. And what the Shoot method does is create a sphere in code. Just like we created those cubes in the Unity editor, we can also create those dynamically in code. And since we attached that script to the camera, I thought those spheres are like you're shooting out your eyeballs; that's why I'm calling the variable eyeball. So let's set the scaling a bit lower, so it's not a huge ball, but smaller. And we attach the Rigidbody component to it as well, so we have rigid-body physics calculations. We give it a mass, which is a bit lower, and we set the initial position of the rigid body to transform.position. And this transform in the script is the camera transformation, since we attached the script to the camera, so transform.position is basically the camera position. And then we give it an impulse in the direction the user is looking. transform.forward is the vector the user is looking along, and we multiply it with a constant factor, which is defined up here as 300 newtons. And if you have not done Unity, you might wonder why I define public fields here. You will see in a second once we switch back to Unity.

So let's save it here, and we can switch back to Unity. Give it a few seconds to update, and there we go: you see that field is now here. This is nicely done; without any attributes or anything, it just surfaces here in the Unity Inspector. Unity does this with public fields because they are serialized and shown in the Inspector.
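Before we deploy, here is a condensed sketch of roughly what that CannonBehavior script looks like. The gesture APIs are from the 2016-era UnityEngine.VR.WSA.Input namespace (later Unity versions renamed them), and the scale, mass, and force values are just the ballpark figures from the talk, not a verbatim copy of the demo code.

    using UnityEngine;
    using UnityEngine.VR.WSA.Input; // HoloLens gesture APIs in 2016-era Unity builds

    public class CannonBehavior : MonoBehaviour
    {
        public float ShootingForce = 300f; // public field: serialized and shown in the Inspector

        private GestureRecognizer recognizer;

        void Start()
        {
            recognizer = new GestureRecognizer();
            recognizer.SetRecognizableGestures(GestureSettings.Tap); // we only care about the single air tap
            recognizer.TappedEvent += (source, tapCount, headRay) => Shoot();
            recognizer.StartCapturingGestures();
        }

        public void Shoot()
        {
            // Create the "eyeball" sphere dynamically, just like the cubes created in the editor.
            GameObject eyeball = GameObject.CreatePrimitive(PrimitiveType.Sphere);
            eyeball.transform.localScale = Vector3.one * 0.15f; // keep it small

            Rigidbody body = eyeball.AddComponent<Rigidbody>();
            body.mass = 0.5f;                                  // a bit lower than the default
            body.position = transform.position;                // script sits on the camera, so this is the head position
            body.AddForce(transform.forward * ShootingForce);  // push it in the gaze direction
        }
    }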
Cool. So this is our basic setup. Let's build it for the HoloLens. So, build settings, and we select Windows Store. Since it's a Windows 10 device, we select UWP 10. We want to render with Direct3D, because we want to have 3D holograms. And then we hit the build button, select the folder, and let it create the build output. What it does now is basically generate a new Visual Studio solution, which is the HoloLens application, because the HoloLens application in the end is a UWP app. That's what it is, with Direct3D rendering. So it generates it for us. And I did this before, so it should be a bit faster now. And I already opened the other solution. So once that is done, you will see this reload-all here, so we can hit reload all and reload it. And then we launch it in the emulator. You can select release; I usually just go with release mode. And x86, because the HoloLens has a 32-bit processor, so we use x86.

Then you can select the target output. You can go with device, if you have the device connected via USB, or remote machine, if you have an IP address you can deploy to over the air, basically; it's very nice. And we will just go with the HoloLens emulator for this case. So just hit the button here. I already opened the HoloLens emulator before, so it's a bit faster. You have the nice Windows start menu here. And you can simulate gazing basically with the mouse: left mouse button down and mouse move is gazing, and right mouse button down simulates an air tap. So that's very convenient. So our app launches now, and there we go: we have our cubes on the plane. And then I can hit the right mouse button and shoot at those guys. But you see an issue here: it's very hard to aim. It's very hard to see where I'm actually shooting, because I don't have an indication of my gazing. So I probably want some indication of gazing. Let's fix that; let's add a gaze cursor.

Okay. Stop it here, switch back to the editor. Let's add a gaze cursor and plug it in. What I define here is another reference we will set up from our Unity scene, so we can create another game object and just plug it into our script here; you will see that in a second. We use another game object for showing the gazing, for showing like a cross or something you can see where you're looking. And I will do the gaze update in the Update method. This Update method, like I said, is called every frame by Unity. And what I'm doing here: I do a RaycastAll, which is a built-in method in Unity, so I can shoot a bunch of rays into the scene from the camera position in the direction the user is looking. And where those rays hit a virtual object, I get a raycast hit back. I sort those by distance, because I'm interested in the one object that is closest to where I'm looking. So I get this first hit, basically, and I use that raycast hit position, where this was hit, to set the position of our gaze cursor to that place. And I also want to orient the gaze cursor nicely to the surface of the object, so I use the raycast hit normal as a forward vector for our gaze cursor.

Okay. Let's save it, switch back to Unity, and add our actual gaze cursor game object. What we will do here is add another cube, just a simple cube. We call it a flat cube because we will make it a bit smaller; let's go with, yeah, like this. And this. So, a very small flat cube. And you see, if we would just use the white color, on the white cubes we wouldn't see where we're gazing. So let's add a different material to this one. And I also want to remove the box collider, because I don't want physics interaction with the gazing; it's just a visual indication. Add a new material, create a red material, and there you go. Just apply it. So we have our gaze cursor here. Another thing we need to do, of course: we need to add the reference here. I can just drag and drop the game object from here to there, so our script has a reference to this game object, basically.
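The gaze update just described boils down to something like this sketch; the GazeCursor field is the flat red cube you wire up in the Inspector, and the script again sits on the camera so its transform is the head.

    using System.Linq;
    using UnityEngine;

    public class GazeBehavior : MonoBehaviour
    {
        public GameObject GazeCursor; // the flat cube, assigned via drag and drop in the Inspector

        void Update()
        {
            // Cast from the head position along the gaze direction.
            RaycastHit[] hits = Physics.RaycastAll(transform.position, transform.forward);
            if (hits.Length == 0) return;

            // We only care about the closest virtual object we're looking at.
            RaycastHit closest = hits.OrderBy(h => h.distance).First();
            GazeCursor.transform.position = closest.point;
            GazeCursor.transform.forward = closest.normal; // align the cursor with the surface
        }
    }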
Cool. But before we deploy it, I want to add some more features. I also want to add spatial mapping. So right now we have air tapping, we have gazing; now I want to have spatial mapping. And spatial mapping is quite complex to implement; there are many lines of code you would have to write, and it's a session on its own, actually. But I made it a bit easier, because there's a nice project by Microsoft called HoloToolkit. It's up on GitHub, an open source project, and they have a bunch of very nice scripts, a few for spatial mapping as well. So you can just use them, because someone already implemented it for you. Very nice. So I created a custom package which I can just import here. Here you go. This is a Unity package where I just extracted the stuff from the HoloToolkit that we are interested in, like the Spatial Mapping Collider, and we will talk about this in a second. So let's import those. I have a prefab here; I just plug it into my scene, and it has the two scripts we're interested in already attached.

There are two scripts from the HoloToolkit which I'm using here. One is the Spatial Mapping Collider, which basically generates a spatial mapping collision mesh that we can use for physics interaction. So we can use the real-world spatial map for physics interactions. And this has a bunch of properties. I can say what kind of bounding volume I'm interested in, and I set it to a sphere of five meters: everything that's five meters around me will be part of this spatial mapping mesh. And then you can also define the level of detail, low, medium, and high, which basically means higher precision, but it also takes more processing power, of course. And the default for time between updates is two and a half seconds, which is the interval at which the spatial mapping mesh gets updated. So if lots of people are walking by here fast, they won't be part of it. But if I place a table in here and move it out, that will be part of the updated mesh. Yeah, and there's also the Spatial Mapping Renderer, which is similar to the Spatial Mapping Collider, but this is used for visualizing the spatial mapping mesh. And I can use a material here. I have a wireframe material attached to this one, which is just like a wireframe, looks like a net, and you will see in a second how this actually looks. And I could also set occlusion. If I set occlusion, you won't see the actual spatial mapping mesh, but your virtual objects will be occluded; they will be hidden by real-world objects. So this is probably something you want to do.

Okay, cool. So we have gazing, gestures, spatial mapping. Let's add the last thing for the session: speech recognition. And we will do this with a nice little script. So I have another C# script, and I just called it SpeechHandler. There you go. Let's remove that stuff and plug in some code I prepared. Yeah, ReSharper doesn't show me which namespaces I should add, so let's switch here and copy those in. There you go. Yeah, it's Windows Speech; I always forget this one. It's Windows Speech, so the namespace is here. Okay, cool. This is the KeywordRecognizer. It's similar to the GestureRecognizer, but of course it's used to recognize keywords. And the nice thing is, you just give it a string array, basically. You can see I defined some string variables up here. These are just normal C# strings, like hide plane, shoot, reset scene. And I pass those into the constructor of the KeywordRecognizer. And if you have done speech recognition before on other platforms, you probably had to define some XML and a grammar and whatnot. And this is what I really liked: they made it very easy. You just use C# strings, and there you go. And once it recognizes one of those keywords, it triggers the OnPhraseRecognized event. We have an event handler here, and it gets passed the arguments and the text, which is basically the recognized string. So I just do a very simple string compare; it's so easy, it's really simple. I compare it with my hide plane command. ("Something's not right. Try again in a little bit.") It's all good, Cortana. Calm down. So, for the hide plane command, I can deactivate the plane. For shoot, I'm shooting the cannon. And for the reset scene command, I'm resetting the scene; I'm just reloading it, basically.
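To recap that speech script, it's roughly the following sketch, condensed to two of the three commands. KeywordRecognizer, OnPhraseRecognized, and args.text are the real UnityEngine.Windows.Speech APIs; the field names and the wiring to the plane and cannon are my reconstruction of the demo, not a verbatim copy.

    using UnityEngine;
    using UnityEngine.Windows.Speech; // the namespace I always forget

    public class SpeechHandler : MonoBehaviour
    {
        public GameObject Plane;        // references plugged in via the Inspector
        public CannonBehavior Cannon;

        private const string HidePlaneCommand = "hide plane";
        private const string ShootCommand = "shoot";

        private KeywordRecognizer keywordRecognizer;

        void Start()
        {
            // Plain C# strings; no grammar XML needed.
            keywordRecognizer = new KeywordRecognizer(new[] { HidePlaneCommand, ShootCommand });
            keywordRecognizer.OnPhraseRecognized += OnPhraseRecognized;
            keywordRecognizer.Start();
        }

        private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
        {
            // args.text is the recognized keyword; a simple string compare is enough.
            if (args.text == HidePlaneCommand) Plane.SetActive(false);
            else if (args.text == ShootCommand) Cannon.Shoot();
        }
    }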
Cool. Saving it; always important to save the scripts, and then switch back to Unity. We have our script here; I just attached it to the camera as well, so drag and drop. There you go. And then we plug in the references, like the plane, and the cannon. The cannon is attached to the camera as well, so I just attach this one here. And yeah, let's build it. Again, for the emulator, but we will also see it on the HoloLens later. Okay. Let's build it for a second. And you notice it's taking quite some time to build. The deployment cycle is quite long when you want to test stuff, and at the end I will share some best practices on how you can avoid that and be more productive and faster inside Unity, without always having to wait for the build time and the deployment cycle.

Cool. Launch the emulator again with our updated stuff. And the emulator is really nice; it's included in the SDK. And you can actually also load different spatial maps. You have this room tab here where I can basically load a spatial mapping I created before. So I can take the device, save the spatial mapping of a room, and then load it into the emulator. Pretty cool, also for testing. Okay, cool. So you see the gazing here? I have this little flat cube, which is now shown at the position I'm looking at, and also nicely oriented to the surface of the object. I can also use speech commands. Hide plane. You see? And you see another thing? They bounce off the real world. If you look closely, you might be able to see a sofa here; this is the default room that comes with the HoloLens emulator.

Okay, cool. So this is the emulator. Let's do something very brave: let's switch to the device. And you should be able to see what I see now. Let's put it on. And you will get the code from the demo all on GitHub; I put all the sample code up on GitHub. Good. Let's disable audio as well. And, yeah, there we go. Okay, let's open the HoloLens start menu. I have the finished app already pinned here, so I open it, place it somewhere like this, and then it loads. And like I said, you can get all the source code from that little demo at the end of the session. It actually also has audio on collisions, so we have spatial sound as well. There you go. I have a different gaze cursor, you see? I made two little cubes. And I have someone else here. This is me, a 3D-scanned hologram. Yeah, I know it's weird. Yeah. And since we are in the Scandinavian region, I thought instead of shooting spheres, we'll do something cooler: I thought we should shoot some meatballs, like kjøttboller. Is it correctly pronounced? I hope so. So let's shoot some kjøttboller. This guy, I hate him. Let's shoot him away. Hide plane. See that? And you're all going to get your kjøttboller for lunch. There you go. And you see them? They bounce off here in the real world, right? This is pretty amazing with the HoloLens. The HoloLens can do the spatial mapping, right? So we can shoot here.
I can do this the whole day, basically. And it's almost lunchtime, right? So we haven't had our kjøttboller yet. Yeah, okay, cool. Glad that worked out, because sometimes this doesn't work with the Wi-Fi, but I actually have my own Wi-Fi access point, so it's good. Cool. Glad the demo worked. Like I said, you can get the source code of this, maybe not with the kjøttboller texture, but with the rest. You can get all of that on GitHub later on, and you can play around with it. Cool. I also had a video prepared, just in case it didn't work. This is the video I recorded at home. You see, for this one I actually used eyeball textures. And yeah, the nice thing is, you see how the spatial mapping adapts to the environment. Those spheres, you see how they bounce off and roll down the stairs? This is recognized by the HoloLens. Yeah, good fun.

Okay, cool. So shooting eyeballs and kjøttboller is really, really fun, but a very small niche, right? So let's talk about some real-world applications. We have built a couple of apps by now. We did an engagement with museums, where we tried out some different stuff you could show in a museum. We put some dinosaur models into the office, which is very amazing, actually, because you can see those dinosaurs at real-world scale. It's in metric space, right? So you can figure out how big they actually are. A really, really nice experience. We also did some stuff for the automotive industry, for construction as well, and plane maintenance, and a couple of other actual real business use cases.

I want to spend a bit of time talking about an app I was working on for the last couple of months. It's called HoloFlight, and I want to share some best practices and some learnings with you from the application as we built it. HoloFlight is a real-time flight data visualization, basically. We take real flight data and visualize it in 3D. And if you think about air traffic controllers, what they use these days, they are looking at flat 2D screens, right? But planes and flights are actually flying in 3D; they have an altitude. So we figured, let's put them in the HoloLens and visualize them as holograms. That gives you another relationship between the flights, because you see them nicely stereoscopic, so you get the relations between the different flights. And we can also visualize invisible information, like flight trails and so on.

Here's a quick video showing you the app. Yeah, let's turn down the volume a bit. So what is that? Okay, that's a bit slow. Let's restart the video. Okay, that's better, I think. Cool, so you can see the Hawaiian Islands here, and we have the flight space of the Hawaiian Islands basically visualized. And you can gaze at those planes that are flying; like I said, this is real-time flight data, and you can see flight information like call sign, altitude, and whatnot. You can also hear air traffic control conversations in spatial sound. And we can also visualize the airport weather information: wind speed, wind direction, all of that stuff. And we use the spatial mapping of the HoloLens, so the user can basically pin those information panels in different places, on different walls, on a table, and whatnot, and lay out the workspace. We also have different levels of detail for the terrain. The terrain is, by the way, built using Bing Maps API data, so this is real topographic data.
Yeah, like I said, we have real-time flight data, but we also have our own Azure backend where we can cache the data, so we can play it back at different speeds, basically. What you can see here, we play it back faster, and then you can see those flight trails visualized, information you usually don't see in flight visualizations. So that's nice, and it adds another value, of course.

Okay, so what are the challenges? First of all, flight information is usually visualized in 2D, but if you visualize it in 3D, you also open up another dimension of errors, basically, which you don't see in 2D. Yeah, you also need to be careful with the holographic frame size, because you cannot put too much information in front of the user. Flight information is very dense; you have a lot of information, you want to show a lot of information, but on the other hand, you need to be careful that you're not putting too much stuff there. The spatial mapping of the HoloLens, as you have seen, is really amazing; it can scan the room, so you want to use it in some way in your app. Gazing at and selecting small objects like those tiny planes can be very hard, because they are pretty small; we had to fix that as well. And yeah, you want to make it an awesome immersive experience, and spatial sound is also one important factor there.

So how did we solve those? First of all, finding the right flight data. We partnered with a company that provides us the flight data via a nice REST-based web API. So we get the data as JSON; easy, everyone can parse that. But then we noticed a bunch of errors in the data, actually, which you don't see in 2D, like I mentioned. For example, we had some crazy altitude drops. Those do happen a few times in reality, but not as often as we saw in the data; if they happened that often, no one would fly anymore. So there were definitely some data glitches, right? And we had to fix that. You can basically go with two approaches: you can do it offline or you can do it online. Since we wanted to do it in real time, we did it online. So we invented some algorithms there to fix the data, to smooth out the data, to make more sense out of the data, basically, and avoid those issues.

Then, of course, you need to visualize it in 3D. You have geographical coordinates, and you need to map those to an unwrapped planar rendering. And we also want to map the altitude. And if we mapped the altitude linearly, like the 0 to 35,000 feet of flight space, we would waste a lot of rendering space on uninteresting flight information, because the most interesting flight information is in the first 3,000 to 5,000 feet, close to the ground, right? So what we do is use a non-linear mapping of the altitude, basically, so we give the more interesting flight space more rendering space as well. Then we get a bunch of positions for each of the planes in the flight space, and they can even have different timestamps and a lot of different data, basically. So we need to normalize them so they make sense when we visualize them all together in one flight-space rendering.
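As an illustration of that non-linear altitude mapping, here is one plausible curve, not HoloFlight's actual formula: a square-root mapping gives the low, interesting altitudes proportionally more rendering space than a linear mapping would. The range and height constants are made-up illustration values.

    using UnityEngine;

    static class AltitudeMapping
    {
        // Maps a flight altitude in feet to a vertical rendering offset in meters.
        // The square root compresses the top of the range and stretches the bottom.
        public static float MapAltitude(float altitudeFeet,
                                        float maxAltitudeFeet = 35000f,
                                        float maxRenderHeight = 0.5f)
        {
            float normalized = Mathf.Clamp01(altitudeFeet / maxAltitudeFeet); // 0..1
            return Mathf.Sqrt(normalized) * maxRenderHeight;
        }
    }
    // e.g. 3,500 ft is 10% of the altitude range but gets ~32% of the rendering
    // height, instead of the 10% a linear mapping would give it.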
And the right size is important, of course, because we are visualizing those planes, and if you step further away, they can become very small; you just see a few pixels, actually. So what we do there, we use level of detail to swap out the plane model with just a cube. And you don't notice this, because you just see a few pixels anyway. And we also make sure that if the user steps even further away, the cube stays at the same size. You can still see a few pixels there in the back, and you know there's something going on there.

Yeah, the UI. We had a couple of iterations for the UI to make sure that we are, like I said, not polluting the holographic frame, not showing too much information there. This was the first iteration. That's my fault; it's developer UI, so super ugly. The next one was in-place billboards. We had those little billboards which were shown directly at the plane position, and you could pin and enable them, multiple billboards, basically. And you see the issue in the screenshot: you have those overdraw and overlay issues. Also not very nice. Then we figured, let's use a central curved-screen UI, which is nice because the human head is also a bit curved; you have all those curved TVs, right? So we figured, let's use a curved UI. But this thing then kept on growing and growing; we wanted to show more information, like flight information and weather information and whatnot, so it wasn't fitting the holographic frame anymore. So then we split it up and polished it to what you have seen in the video: we have those independent panels, basically, and the user can pin those separately in the room and lay out their workspace. And we're using the spatial mapping of the HoloLens to allow the user to place those.

Cool. Yeah, size matters also for ray casting. Like I said, those planes are very small, and if you want to gaze at them, if you want to select them, the gaze ray will often hit nothing, basically. So we figured, let's do something else, and we use basically just a simple sphere collider. We use a sphere collision volume, which is a bit larger. And you can also think about adapting it dynamically: if the user is stepping further away, you can grow it a bit, to a certain amount, and if the user is getting close, you can make it smaller and then actually switch to the real mesh collider of the plane. The sphere versus the mesh collider is also a nice performance gain, because testing a ray against a sphere is really cheap; it's really easy and doesn't cost a lot of performance.
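That distance-adaptive selection idea could look something like this sketch; the radii and the scaling factor are made-up tuning values, not HoloFlight's actual numbers.

    using UnityEngine;

    public class AdaptiveSelectionCollider : MonoBehaviour
    {
        public SphereCollider SelectionVolume; // the cheap sphere used for gaze ray tests

        void Update()
        {
            float distance = Vector3.Distance(Camera.main.transform.position, transform.position);
            // Grow the collider as the user steps away so tiny planes stay selectable,
            // but clamp it so it never gets absurdly large or smaller than the mesh.
            SelectionVolume.radius = Mathf.Clamp(distance * 0.02f, 0.05f, 0.3f);
        }
    }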
Yeah, spatial sound. There are some crazy experiments out there, like the one you can see on the slide; I'm sure it has super amazing spatial sound, but it might also be a bit heavy on the head. So I'd rather prefer this one. The HoloLens has those tiny speakers up here; you have those small speakers here, and of course they don't provide very much bass, not very low frequencies. But what it does with those tiny speakers is super impressive; it's really good. And yeah, it's perfect brain trickery, actually, because you know where the sound is coming from. And in HoloFlight, we actually use it for this: "Beaver at 182 heavy, right mic alpha, right Bravo, then Papa." So we use that to play back air traffic control conversations. And those are not just adding value from the ATC conversation itself, but also from the spatialness, from the spatial sound, because we're playing back that sound in 3D at the plane position. So even if you're not seeing the hologram, if you rotate your head and don't see the hologram, which is somewhere here, you hear the spatial sound and your brain knows where to turn. You know where the sound is coming from, right? And this is what they have done really nicely with the HoloLens. They have a very good algorithm there to compute that spatial sound. Really good. And if you have a use case for it in your applications, make sure to use spatial sound. It's really the icing on the cake, if you will.

Cool. Some more best practices. Use fading and transitions; this is an important one, because in the real world, objects don't just appear or disappear. Maybe ghosts, if you believe in such things, but I don't. And virtual objects, you can just enable or disable them, right? You can just make them appear or disappear. But of course you want your virtual objects to behave like real-world objects, so you need to fade them in, move them in, grow them in size, shrink them in size, and so on. And I have three short clips here where I want to show you the differences between the three approaches you can take. So we have all the Hawaiian islands here, and then we want to switch to just one island, right? First, we see all islands just switching to one island without any transition; this is not very nice, very awkward. Then you could do a cross fade, like an alpha blending between the terrains. You can do this, which is a bit nicer, a bit smoother, but on the other hand you're losing the context: as a user, you don't know which island you're actually zooming into, right? You see all islands and then you just see one island, but which one? So what we did in the app is basically scaling up the texture of the all-islands mesh, and then doing a cross fade to just the one island. This is a bit nicer, and it's a nicer transition for the user to actually know where to look.
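A minimal fade-in in that spirit might look like this sketch, assuming the object's material uses a shader with an alpha-capable color property (a transparent shader setup); the duration is an arbitrary starting value.

    using System.Collections;
    using UnityEngine;

    public class FadeIn : MonoBehaviour
    {
        public float Duration = 0.5f;

        // Start can be a coroutine in Unity, so the fade begins as soon as the object exists.
        IEnumerator Start()
        {
            Renderer rend = GetComponent<Renderer>();
            Color c = rend.material.color;
            for (float t = 0f; t < Duration; t += Time.deltaTime)
            {
                c.a = t / Duration;        // ramp alpha from 0 to 1 over the duration
                rend.material.color = c;
                yield return null;         // wait one frame
            }
            c.a = 1f;
            rend.material.color = c;
        }
    }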
Cool, so let's talk about my top 10 HoloLens development recommendations. First of all, you can be a really good developer, you can be a great 3D developer, the best one in the world, but if you don't have any 3D content to show, well, you don't have anything to render, right? And at IdentityMine, we're lucky to have a bunch of very talented designers and also 3D artists who can make very nice and good 3D models, and also with a reasonable triangle count. And "reasonable" matters because the HoloLens is a mobile device, right? The HoloLens is a mobile device, and compare that to virtual reality headsets like Oculus Rift and HTC Vive: those are just displays, basically, with a bunch of sensors, but the computing is all done on a computer, on a full-blown desktop PC, right? For the Oculus Rift, you need a really high-end PC, which of course can compute amazing scenes, can compute really nice renderings, but you're always connected with a cable; you're basically always on a leash. I much prefer the HoloLens approach, because it's self-contained. This is a computer, right? It has everything inside here, and it doesn't need an extra cable; I can just put it on, walk around freely, and interact with other people as well. So this is much nicer, but on the other hand, you have limited computing power, of course, because you cannot put a full-blown desktop graphics card in that device; it would probably get very hot if you did, and that is something you want to avoid as well. So, limited computing power. And what we noticed is that the HoloLens is mostly fill-rate bound, basically. You can render tens of thousands of triangles, like 50,000, 60,000, 70,000; it's not an issue to render those. But if those are rendered close to the eye, if they're rendered close to the camera and take up a lot of pixels, and you have a heavy pixel shader running for each pixel, then your performance will drop. You run into issues there really quickly. So you want to draw your pixels very cheaply, and, for example, don't use the Unity standard shader, because it's too heavy; it's doing too much. What we noticed is that you can actually get away mostly with vertex lighting. You can just use a simple vertex-based lighting model and have a super cheap pixel shader. Another thing you want to avoid is overdraw; this is the same with every mobile device, basically. You don't want meshes, models, rendered one after another, which causes pixels to be drawn multiple times. You want to avoid those overdraw issues; large transparent objects are an issue too. So you need to be careful there, because you want to render at 60 frames per second. It's really important to render your holograms at 60 frames per second; otherwise, if they drop to 30 frames, they can become unstable. You have seen they're pretty stable in the room, right? Even if I turn my head, they stay at the same position. But if the frame rate drops, they can become unstable and drift. This can make users sick. Microsoft actually did some user research there, and some people really get headaches, and they can throw up. So you really want to go with 60 frames and optimize everything.

Still, the HoloLens has a multi-core CPU, and in HoloFlight we use that for the data fetching, the data cleaning; you know, all the processing is done on a background thread, basically. So we keep the UI thread free from that work, and the UI thread can keep up with the rendering loop.

Yeah, if you have never done 3D programming, or it's been a while, you probably want to brush up on some of your math skills, as you have seen there is a bunch of stuff going on, like with vector algebra and so on. You don't have to implement it all yourself; Unity, or whatever game engine you're using, helps you a lot, they have everything built in, but of course you need to be familiar with what it means. What is a transformation? What is a matrix calculation? So this is important.

Anchor your holograms. The HoloLens has an API which is called World Anchor. And if you remember the demo, I grouped those cubes and the plane in one container in the scene hierarchy, right? What I can do is apply a World Anchor to that container, and then the HoloLens runtime basically gives that World Anchor its own coordinate system. It has its own coordinate system, which results in the HoloLens keeping those World Anchors very stable. So even if I leave the room and come back into the room, the holograms will still be at the same position. This is done with the World Anchor, basically. And the coolest part about it is that you can actually persist those. You can save them in a global store on the HoloLens; you save the World Anchor with an ID, and once you reload your app, or restart the device and then load the World Anchor position, they will be at the same location. And when I'm flying back home, hopefully tomorrow, I don't know, I heard some really interesting things about a strike; well, when I'm flying home, hopefully I will see some holograms in my office at the same position where I left them. So this is done with the persistence, right?
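A sketch of that anchoring and persistence flow, using the 2016-era namespaces (UnityEngine.VR.WSA and UnityEngine.VR.WSA.Persistence, renamed in later Unity versions); the anchor ID is any stable string you pick.

    using UnityEngine;
    using UnityEngine.VR.WSA;             // WorldAnchor
    using UnityEngine.VR.WSA.Persistence; // WorldAnchorStore

    public class AnchorContainer : MonoBehaviour
    {
        private const string AnchorId = "physics-demo-container"; // any stable ID

        void Start()
        {
            WorldAnchorStore.GetAsync(store =>
            {
                // Try to restore the anchor saved in a previous session first.
                WorldAnchor anchor = store.Load(AnchorId, gameObject);
                if (anchor == null)
                {
                    // No saved anchor yet: anchor the container here and persist it.
                    anchor = gameObject.AddComponent<WorldAnchor>();
                    store.Save(AnchorId, anchor);
                }
            });
        }
    }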
So you can persist those World Anchors. Pretty cool.

Yeah, leverage level of detail. Like you have seen, you can save some rendering time there as well. You don't need to show the highly detailed model when it's meters away.

Then, the gaze cursor is important. As you have seen, the main input paradigm of the HoloLens is gazing and gestures, so you want to make sure that your gaze cursor is very stable and very nicely done, because the user will see it most of the time, basically. And one part of that is to smooth it. Because the gazing is based on head rotation, it is based on the IMU unit, which means it is based on sensor data. And like every sensor in the world, this contains noise if you use the raw data, which makes the gaze cursor jitter, always slightly move. And this is something you want to avoid, right? You don't want the user to always have a jittering thing in front of them. So you want to smooth it, basically (there's a small sketch of that idea at the end of this section). The HoloToolkit I mentioned has a bunch of reusable scripts and also a smoothing algorithm implemented, so you can use that one. We actually developed our own, because it gave us a bit better results with less lagging; it's a bit faster to react. Also, the one we implemented has some prediction, so it gives us better results.

Another thing about the gaze cursor is the hand-ready state. So the user is air tapping, right? And the cameras of the HoloLens, of course, need to see the hand. So if the user is air tapping down here, it won't work, because it doesn't see the hand, right? It needs to see the hand somewhere here, or there also works, but not there. So you want to give that feedback to the user: that the device is now seeing the hand and the user can interact with the piece they're gazing at. What most apps do, and also the HoloLens start menu, is show an open ring gaze cursor when the hand is in view; that tells the user, okay, you can now interact. And they show a flat circle, something like this, when the hand is not in view, to give the user that information. Open ring cursor: you can interact. The other cursor: nope. Those details really matter. They make for a good experience, and you probably want to have a good experience, so pay attention to those details.

Yeah, use animations and transformations to let your virtual objects behave like real-world objects.

And yeah, this is also a nice one. If you have a bunch of Unity projects, you probably also have a few reusable scripts. The naive approach would be to copy those script files between all the different projects, but this is not nice, because when you have a bug, or want to change something, you have to copy all the stuff again. So what I did instead: I created a central C# solution where I have all the reusable scripts in one central place, basically. I can then just build out a DLL and copy that into a special folder in your Unity project. The Unity project has that Assets folder, right? And there's a special folder you can create called Plugins, where you can put in DLLs, and then you can use those scripts from the DLL as well. And this works out quite nicely, because I also have a post-build step in my C# solution, so I just hit build, and all my projects are updated with the latest stuff.
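Here is the promised sketch of the gaze smoothing idea: plain exponential smoothing of the gaze direction, without the prediction mentioned above. The smoothing factor is a made-up starting value to tune.

    using UnityEngine;

    public class GazeSmoother : MonoBehaviour
    {
        [Range(0f, 1f)]
        public float Smoothing = 0.25f; // how much of each new raw sample to blend in

        private Vector3 smoothedForward = Vector3.forward;

        // Call every frame with the raw head-forward vector; returns a filtered one.
        public Vector3 SmoothGaze(Vector3 rawForward)
        {
            // Blending only a fraction of the new sample per frame filters out the
            // high-frequency IMU noise that makes the cursor jitter.
            smoothedForward = Vector3.Slerp(smoothedForward, rawForward.normalized, Smoothing);
            return smoothedForward.normalized;
        }
    }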
There's one gotcha, though. The Unity editor uses the Mono runtime, which is actually a five- or six-year-old version of the Mono runtime, so it's quite outdated. I think this is being fixed now, since Microsoft acquired Xamarin. Like I said, I was at the Unite conference last week, and they basically announced on the roadmap that it's planned to update the Mono runtime they're using inside Unity, which is great, because right now you don't have TPL or anything in the Unity editor; you don't have the Task Parallel Library. But on the HoloLens, it's running the Universal Windows Platform, so it's running UWP with .NET Core, the latest one. So what I have to do is basically have two C# projects: one I'm building for the Mono runtime, the other one I'm building for UWP. And I'm just sharing the scripts, and sometimes you have some pre-compiler directives, like #if UNITY_EDITOR, blah, blah, blah, do this; those kinds of things. But anyway, having this central C# solution with all the scripts in one place really is a huge benefit for maintenance.

Yeah, avoid the long deployment cycle. As you have seen, it takes quite some time to deploy something to the HoloLens. First you change your Unity scene, then you build the Visual Studio output, then build that again, and then deploy it to the emulator or the HoloLens. That's really long if I just want to test some small stuff. So what I did here, I wrote a custom script where I can simulate the gazing and gestures already inside Unity. I can do the same thing the emulator does: I can use my mouse for gazing, and the right mouse click for air tapping. I can also use different gestures, but I can do this already inside Unity, and for every little change I don't have to deploy to the device or the emulator. It's a huge, huge time saver. The HoloLens also has a bunch of other gestures I didn't mention: besides the air tap, it has double tapping and tap-and-hold, so you can do scrolling. The nice thing is it's three-dimensional, right? Not just this, but also like this. So those gestures I can also simulate inside Unity with the custom script I wrote there. Yeah, so you can stay productive most of the time inside Unity. But of course, you also want to test it on a device, if you're lucky enough to have one, because nothing comes as close to this device as the real device itself, you know, the performance and spatial mapping and so on. But anyway, the emulator is really good. As you have seen, it supports speech recognition, I can load spatial mappings, I can basically test most of the stuff already with the emulator.
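That editor-only simulation trick looks roughly like this sketch: mouse movement drives the gaze and the right button fakes an air tap, all compiled away outside the editor. The rotation speed is an arbitrary value, and the Cannon reference reuses the same code path the real gesture recognizer triggers.

    using UnityEngine;

    public class EditorInputSimulator : MonoBehaviour
    {
        public CannonBehavior Cannon; // same code path the real gesture recognizer triggers

        void Update()
        {
    #if UNITY_EDITOR
            // Left button + mouse move simulates gazing by rotating the camera.
            if (Input.GetMouseButton(0))
            {
                float yaw = Input.GetAxis("Mouse X") * 2f;
                float pitch = -Input.GetAxis("Mouse Y") * 2f;
                transform.Rotate(pitch, yaw, 0f);
            }
            // Right click simulates the air tap.
            if (Input.GetMouseButtonDown(1))
            {
                Cannon.Shoot();
            }
    #endif
        }
    }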
Cool. So I think the next chapter of computing is really happening right now. It's a great time to be a developer. It's really good to be working in this space again. Yeah, and HoloLens is one of a kind, I think. You have a bunch of virtual reality devices, and HoloLens is often compared to those, but of course it's not a VR device, as you have learned; it's actually a mixed reality device. So, totally different. And the difference is not just in the rendering, in augmenting the real world: it can also do the spatial mapping, for example, you have the collaboration features, and so many different things. And I'm pretty sure the HoloLens will change how we interact with computers. It really has huge potential. And you can even run 2D apps on it. Universal Windows Platform applications you develop can also run on HoloLens, but they will run in a window, basically. You can have a 2D window, and you can pin it here or there, whatever.

Running those 2D apps is also nice, but of course, with stereoscopic 3D rendering, 3D is king, basically. So you want to have 3D content. And you can do this by going straight with Direct3D 11, which we also do, by the way, or you can use Unity, right? And Unity is a nice tool. You can get started quickly and be very productive. And there's a use case for every piece, right? For example, I wouldn't build Skype with Unity; I would build it straight with Direct3D and C++, that's for sure. But for a really nice proof of concept, Unity is great. And not just for that: like I said, a bunch of the HoloLens applications in the store, built by Microsoft, are actually built using Unity.

Yeah, you can get a few links here. The HoloLens SDK you can download for free. It has the emulator inside, and also the special Unity build that supports HoloLens development. HoloToolkit for Unity is also available on GitHub. It contains a bunch of scripts, prefabs, and shaders; they actually have some nicely optimized shaders you can use, like for vertex lighting and so on. Pretty good stuff. You really want to grab a copy of this one once you install the SDK. And yeah, on my blog you can find a longer write-up of the top-ten development recommendations with more details, and also the slides; I will put the link to the demo code on my blog as well. The demo code is actually on GitHub.

Cool. So we just have a few seconds left, and I don't want to overrun, because there's lunch, I think; we're all hungry. Those kjøttboller were just holograms, so they weren't real. Hopefully they have some real ones; I need to try them. Anyway, you can shoot me an email or a tweet, or I will stick around here a bit, so just ask me there if you want to ask some questions. And with that, I thank you for your attention. Thank you.
With the fast developments of powerful Augmented, Mixed and Virtual reality devices like HoloLens, science fiction movie technology is becoming reality for consumers. In this session, Rene Schulte will talk to you about the challenges that AR and VR pose and why 3D is an essential part of this experience. Comprehensible demos will show how every developer can develop outstanding HoloLens solutions with Unity and be part of this computing revolution. When you leave this session, you will understand how to set up your Unity environment, what skills you need to create compelling HoloLens applications and what best practices will help you move forward quickly. Starting with a simple “MR Hello World” demo, we will use this to understand all of the pieces required to run your app. Last, but not least, we will demonstrate some of the applications that Rene and his team have been working on in the past few months, to give the viewers a sense of what can be accomplished with the right skill targeting one of the most anticipated devices in a while. When you leave this session, you will know the challenges we faced while building HoloLens apps and how we solved them. You will also have learned best practices and recommendations to avoid pitfalls and you will hopefully be inspired to build your own HoloLens apps.
10.5446/51860 (DOI)
All right, perfect. So awesome to have you here. Thank you, thank you. We're going to talk about what every Node.js developer needs to know about Elixir today. There's me on Twitter. I respond on Twitter, so hit me there. I love carrying this conversation on out of here, and if you have any questions, always follow up with me there. There's no expiration date on that.

I want to start off with this illustration, which appeared in the Guardian shortly after December 25, 1914. This is an illustration of the Christmas truce, where the German and British soldiers that were in trench warfare, with this excuse of coming out and burying their dead, got together and were civil to each other. In the middle of this horror that was going on, they stepped out and they acknowledged each other's humanity. And in technology, in tech, we often break off into camps, and we don't do that so well. A lot of times we don't reach out. And so for the Node folks here, I really appreciate you all coming to a functional programming talk. And for the functional programming people, I'm really glad you're here to think about this Node side of things. At the point that we make problems visible, then you can do something about it. You can improve the situation. But until you make them visible, it doesn't really happen. And so, thanks. So this idea of civility: we might hate a technology, there might be problems with a technology, we might know that a technology is dangerous, but you have to understand how people got there and have empathy for them in both directions.

So my path, this sort of long road: you can tell by my gray hair that I began writing software professionally back in 1994. I was writing C and VB, the old VB, not VB.NET, but kicking it old school. And then Pascal, VBScript, JScript, JavaScript, and then C# and on. And through that path, I ran into lots and lots of problems. I created lots of problems, like we all do, but I probably created more than my share of problems. And through that, I might not have been the best developer, but I was awfully good at seeing my mistakes and trying to learn from them and seeing what the pitfalls were that led me there. And through that, I worked to try to help other people avoid pitfalls. And that empathy that I tried to use in this land of imperative code and OO code during the 90s and the aughts, interestingly, earned me this reputation for being able to see these problems and help. And that led me then to Norway in 2007.

So I was here, brought over for a large architectural risk assessment project. Here in Oslo, there was a company that had a huge code base: 20 million lines of managed C++ and C# code. They had these 12 architects, and we worked on this how-is-the-software-going-to-destroy-the-company risk assessment project over a six-week period. And through that, all of the problems that I imagined would be in the code base, those were there. You could almost get Robert Martin's book and just hand that book over before you even went in and looked at the code, because those problems are common across all imperative and OO systems. Those code smells are there. And so that was expected, that side of it. Amazing team, but you expected the problems. One thing I didn't expect, though, is there was a system in that shop that had been up and running for four years without a millisecond of downtime. All the other applications were crashing maybe multiple times a day. It was embarrassing. It was causing trouble.
You had dependencies within the code that made it hard to upgrade. You had the mingling of state, data, and behavior, like we're told to do by Martin Fowler. We had that all through the code, and that caused all these defects in the software that made it really hard to solve problems. And we had event storms, because you had concurrency involved in this; concurrency is hard to do if you have mutable state. Problem, problem, problem. But this system, four years without a millisecond of downtime, that was unusual. And so I asked more and more about it, and I actually learned it was even more remarkable. Two years previously, there had been a major code upgrade; this is a compiled language, and they had compiled and pushed new updates to this running cluster without a millisecond of downtime then, either. So I was like, what is the spooky magic? And so I learned for the first time the word Erlang. I'd never heard of it. I was an imperative OO guy, and I was happy there, sort of happy. I knew I had problems and I knew there were problems, but I was comfortable there. And so I dug in and I started learning more about Erlang, and my code changed. And I have built systems that people tell stories about now, that they've seen these good things. I hadn't done that before, before I ran into the Erlang VM. And so it changed my life. And so I'd like to come back to Norway and return the favor. I'm very thankful for this city and for what it's done for me and all the defects it's kept me out of.

So here we have the hero's journey. This is sort of the arc that all novels and movies and everything follow, this arc of the hero's journey. And so that was the beginning of my hero's journey. And hopefully, this will be the beginning of your hero's journey as you go through and you pick up some of this knowledge. We'll go through this ordinary world and we'll progress.

So for the Node folks here, I imagine you came to Node for some of these reasons. It's really a welcoming community. You can come in without knowing much about programming at all, and you're welcomed in. And that's awesome. And it's easy to get started in Node; there's no doubt about that. It's easy to get started. And you can be working in Angular, Durandal, Aurelia, one of these frameworks, and if you need a back end, if you need to stand that back end up quickly, well, you can do that in Node. It's very easy to kick that up. Everyone sort of knows JavaScript; no one really knows JavaScript, everyone sort of knows it. Then people will also come because they've been hurt in other stacks: there are problems with the tools being too heavy, or not being able to have enough control over things, just various problems. And there is a concurrency story on Node. It's different than I think a lot of people think it is, but there is a story there, and people know that concurrency is important. So that brings people. And then there's good support in Visual Studio and on Azure.

I think of this sort of progression in Node. And I have lots of friends that went through Node. I started working in Node when it was brand shiny new, because I was interested in the story of it. I pretty quickly dropped it, because I had already learned Erlang at that point, and I saw, this isn't really fulfilling the bargain that I thought it should. But this is the progression: we start off with folks that are junior, and they're full of optimism. You know, they haven't even heard the horror stories yet. They haven't heard any of the bad.
It's just this welcoming community. They come in and they're excited about this, and that is a beautiful thing to have. At some point you move on, and you then have your sort of engineers. And they have heard the problems; maybe they've seen a few problems. But the thing is, they can say: I'll be a craftsman. I see these problems, they're documented, and I know how to actually be diligent and not run into these problems myself. And that's that tier; this is the second stage of Node development. And then you move on to the seniors and the leads. And I have friends that are in this spot right now. They bought into this, they brought their companies along, and they've heard the problems, they've seen the problems, and now they're like, oh my God, what have I done? You know, they've got themselves in a trap, because they used it for everything; they used it for things far, far outside of its sweet spot. And they're in a terrible place, and they're having trouble shipping code. That's rough. That's a rough spot to be in. And those folks, I'm excited about where they're at, I'm excited about all of these groups coming in, because it's a great feeder into Elixir, I believe. But the folks that are senior, they're looking for escape pods. And a lot of them are joining up at the Elixir user groups around the country, around the world. So you'll have this mix of Ruby people, and you'll have this mix of Node people, and then a few scattered FP people showing up at the Elixir meetups. And so I'm really tickled about this linkage and this connection.

When people talk about Node, they often use the word scalable. And it's sort of a problem, I believe, to use the word scalable in the context of Node. Well, scalable is a rough word. It's sort of a buzzword, a marketing word. It's a word that managers like to use even though they don't know what it means. And I feel like you could describe a car as a fast car. You've got a car that goes and flies through the guardrails and falls down the bluff at terminal velocity, and you could say that that's a fast car, but you're missing an important part of the story by calling that a fast car. There are other things going on. And so when you call Node scalable, there's an issue, in that it can handle the 20,000 connections that come in. It can handle the 20,000 users, as long as you immediately pass that off for something else to do the work. Because if you do the work, you then have this issue where you're blocking. You're blocking your other 19,999 users. So the issue then is: okay, we hand off the work, but who's going to do the work then, and what do we write that in? And there are also solutions. Every time you run into a problem, there's a lot of ingenuity in the Node community, and they have solutions. You'll have things like queues, and you'll have these ways of just passing things off, these libraries that have been written. But it's always hard manual work, with a lot of learning and discipline involved.

A lot of folks, you'll see a lot of blog posts that will say Node is not a silver bullet. And I think you could say that about most technologies. It's not the thing that solves everything. But there's another way of actually looking at the problem of the silver bullet, and in this sense, I think maybe Node is a silver bullet, in that a silver bullet has a really narrow use case. There's a sweet spot for a silver bullet, and that is to kill werewolves.
And day to day, there aren't nearly as many werewolves running around as Hollywood would make you think. And so there aren't as many situations where it maps up closely to the sweet spot. And a more, an uglier view of this might be that it's maybe not the silver bullet; maybe it's more like the full moon that brings out some crazy in the environment. And I'm kind of done with this part of the talk. But I've seen so many of my friends hurt by this, and I believe that this is the most dangerous technology in the world. And I hope that you take this in the spirit that I'm offering it, that I'm really trying to help, and I'm trying to keep problems from happening. So let's now move on to Elixir and how it came to be. So we've got this history of a character named José Valim, who is here at the conference this week to speak. He was a core member of the Ruby on Rails team, and he was trying to solve problems on Ruby. You know, he's a good developer, he's focused on these issues, and it's hard to solve certain problems on Ruby. People would ask for scale. They would ask for concurrency. They'd ask for performance. That's a hard thing to deliver on Ruby. And so he was battling that battle. And then he was reading Seven Languages in Seven Weeks. Out of curiosity, who else has read this book? Awesome, awesome. Yeah. And so he got to the chapter on Erlang and he was like, whoa, like everyone is when they first hear about what this thing does. They're like, that's weird. Why haven't I heard of this thing? And so he was reading about Erlang, and he saw that it fit these problems of fault tolerance, concurrency, distribution. This scale thing that he had been asked to solve. And so he started looking into Erlang then. First, like everyone does when they first bump into Erlang, he thought, okay, can I steal parts from Erlang and bring them over into my stack? And there are reasons you can't do that, which we'll talk about briefly. It's a hard, hard problem. And he's a smart guy, so he realized it pretty quickly: you can't do this, it's not going to happen. So he thought, maybe I'll become an Erlang developer. Okay. Ruby is known for one thing, and that's developer joy. Right? It's a happy place to be a developer. They hug each other. You know? It's a good spot. It's not necessarily known for those other things we were talking about. But it is about developer joy. Now you can say the reverse about Erlang. It is good at solving these hard engineering problems. You've got these crusty engineers that created the language, and two of them are here this week. So we have Joe Armstrong and Robert Virding, crusty, serious engineer guys. They weren't focused on the problem of dev joy. Ericsson in Stockholm, they weren't focused on that problem. And so José decided, I've got another approach. Instead of trying to rip off Erlang, what I'm going to do is write a new language that targets the Erlang VM. And that's the path he went down. I'm going to build developer joy, good modern tooling, all of this on top of this VM that has this amazing track record. And we're going to talk now about the sweet spots. So Elixir: it's approachable, and it's productive, not just in the short term, but in the long term. So it's approachable, productive, and you have modern tooling. First, about approachable: you go to elixir-lang.org and it's a useful website.
It's not just a thing that exists because you've got to have a website. It's really useful for learning the language. And so you jump in there and you go to the Getting Started guide, down on the right here you see 1 through 21, and you actually have a really good handle on the language just by following those pages. You can start here instead of getting a book, and you'll do fine. Then as you're going through that, you can also go to this other link here and find all the meetups around the world. And you'll see a bunch of Ruby people and a bunch of Node people and functional programming people scattered in. This is the pattern I've seen city by city. You can go to GitHub, see the code, see the language, and contribute. And you'll have feedback from José right on the site. So you put up a pull request and he's sending hearts all over the place. And it's a nice thing to see this Ruby love showing up inside of this language. He's a great guardian of the language. He's a great protector, and he's also very interested in bringing in ideas from all outside places. Another part of the tooling story is Hex. You have hex.pm, a package management system. So you can use this for your Erlang code, which you don't have, because there aren't any Erlang developers in the world. There's like maybe five, ten, you know, there are a few of us. So there aren't many. But on the Elixir side, there are a lot, and that's a growing thing. And so we go off and we see some of the libraries out here. You won't find libraries that are like three lines of code. These are libraries that take on more serious things that people need all the time. You won't have your six-lines-of-code package. And also, if you go to something like Ecto and look, you'll see that the level is pretty good here. This is the documentation off of a package that you pull down from Hex. Hex is fast. All sorts of good qualities about it. Packages don't disappear on you on Hex; it's an immutable store of packages. Packages disappearing on you, this of course has been in the news this year. Poor fellow over here, Azer. God bless him. And so, you know, he took a beating. And there's a lot of beating that went around. There's a lot of schadenfreude, a lot of negative that happened here. But I think this comment by Tracy Harms is really nice: this catastrophe isn't the fault of a single solitary developer's decisions, and if a complete stranger can push some buttons and break your production deployment, you have some thinking to do. Especially if that was around six lines of code that was really about doing something that the language should have done anyway, out of the box. Okay, modern tooling. So we'll have this sort of retro, modern, hipster bicycle thing here. And like all modern languages, the terminal is the way you do your good work. And so we'll return to the terminal here. IEx, the interactive Elixir shell. When we say iex, that brings up an Erlang node. So it brings up an instance of the Erlang virtual machine and loads up all the Elixir goodies. Okay, we type iex and we are inside of IEx, and we can do something like h Enum. So help; we're going to get help on this thing. We hit dot, we hit tab, and we get autocompletion of all of the functions within the Enum module. And so we say, I'm interested in what count/1 does, the count that takes one argument. So we get this nice color output.
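A rough sketch of that shell session follows; the abridged doc output below is reconstructed from memory of Enum.count's documentation, not copied from the talk's slides.

```elixir
# A sketch of the IEx session described above (output abridged):
#
#   $ iex
#   iex(1)> h Enum.count/1
#
#                           def count(enumerable)
#
#       Returns the size of the enumerable.
#
#       ## Examples
#
#           iex> Enum.count([1, 2, 3])
#           3
#
# Typing "Enum." and hitting Tab lists every function in the module.
```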
We get examples right there in the docs. And this is the way you can go through all of the libraries in Elixir and have this help right at your fingertips. Mix is another tool. IEx is a tool, Mix is a tool. Mix is ripped off from the Clojure community; there's a tool called Leiningen over there. You can see some of the commands we can do with this. You can use it to build code, run your tests, scaffold new projects, and it's pluggable, so additional things can come in. So let's say mix new fizzbuzz. We're going to scaffold out FizzBuzz. And here we go. We get a readme markdown file, which is nice and modern. We get a .gitignore built in, which ignores the things we would want ignored in an Elixir project. That's nice and modern. We get our config. And then we have our Fizzbuzz module, and then we have tests. I mean, this is nice. We come in here and this is a nice place to be. So let's cd into fizzbuzz and say mix test. We compile, and right out of the box we get one passing test. So they already started us off with a template for us to have tests. So let's now go in and see what that story looks like. Okay, so I have here our scaffolded-out code off of our FizzBuzz. So this is our one test that was scaffolded out for us, and that's why we had our one. Now I have a fully implemented FizzBuzz over here, and we can see it's down here. We play a range of numbers, and it does a bunch of stuff that we don't have to worry about yet, because we're going to see the syntax in a minute. The thing I wanted to show here was the experience of saying mix test. Okay, so we have three tests and one failure. We saw one test a minute ago. What's that about? And why do we have a failing test? Let's go in here and see what this failing test actually is. So it says here Fizzbuzz play 10 to 16, and it expected a FizzBuzz here, but we had a Fizz instead. So let's go up here and see what that's about. What we have here is not a regular comment. This is a doctest, and so right in the middle of our docs we've included examples of usage, to make it easy for people to come in, and right here, this becomes part of our test suite. So we have Fizzbuzz play 1 to 5, it should get this, and that became a unit test. Fizzbuzz play 10 to 16 should get this, and that became a unit test that actually failed on us, because we have a bug down here. We have a pattern match that's too greedy, and so what we need to do is move this code up here, do this, and then we need to move down here and rerun our tests. Oops, I forgot to save, didn't I? Yeah, so our test passes now, and that was based off of that doctest. So I think that's pretty sweet (there's a sketch of the idea just below). So another productive, nice, sweet part of Elixir is the Phoenix framework. This is a web framework that was built by the fellow who just walked in here, José Valim, who created the language, and Sonny Scroggin, who is back there, and Chris McCord, and a bunch of awesome people. Chris was here last year and gave a talk; it's on Vimeo. And so Phoenix is amazing, but I'm not going to talk about it, because there's a lot of good content about it by people here later today. But this is what a lot of people want when they come to a web framework. So, Elixir is functional. You've probably heard a lot of buzz about functional over the last few years. A simple way of looking at functional is you have inputs that are transformed by a function, and you then have outputs. A goes to B, inputs go to outputs.
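A minimal sketch of that doctest idea: the module shape and the exact strings here are my reconstruction, not the speaker's code, but the greedy-match bug is the same one the failing doctest caught, and it's also a nice example of the inputs-to-outputs shape, a range in, a list of strings out.

```elixir
defmodule Fizzbuzz do
  @doc """
  Plays FizzBuzz over a range of numbers.

      iex> Fizzbuzz.play(1..5)
      ["1", "2", "Fizz", "4", "Buzz"]

      iex> Fizzbuzz.play(10..16)
      ["Buzz", "11", "Fizz", "13", "14", "FizzBuzz", "16"]
  """
  def play(range), do: Enum.map(range, &label/1)

  # Clause order matters: the rem(n, 15) clause must come first, or a
  # greedier clause answers "Fizz" where "FizzBuzz" belongs -- the same
  # kind of bug the failing doctest caught in the demo.
  defp label(n) when rem(n, 15) == 0, do: "FizzBuzz"
  defp label(n) when rem(n, 3) == 0, do: "Fizz"
  defp label(n) when rem(n, 5) == 0, do: "Buzz"
  defp label(n), do: Integer.to_string(n)
end

# In the scaffolded test file, one line turns those shell examples
# into unit tests:
#   doctest Fizzbuzz
```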
You don't have this thing where I'm going to come in, I'm going to set the table with all my state, I'm going to set fields and properties, I'm going to have a class, all this stuff, and I'm going to call functions that mutate state, and then I'm going to lose track of what I just did a minute ago and have all these bugs. Instead, you can look at just what came in as the input and know what should have happened inside of the function and what gets returned. This is important. And this is a good way of getting out of a lot of those troubles that I saw on my first trip to Norway. Immutability is another part of functional programming, and Elixir takes this very seriously. It has to take this very seriously, because the Erlang VM has a hard set of rules that it enforces, and this is one of them. So it's not a thing you can turn off. Separation of data and behavior: this is the thing we were talking about earlier, about classes with their fields, their properties, the data and the behavior mingled together, which limits the ways you're able to use that class. In Elixir you have modules that have functions. You don't have properties or fields. So you call into a module, you call a function within a module, and it's just the inputs that come in and then the outputs that go out. So you spawn up a new process and you say, this process, I need to call into this function passing in this input, and I'm going to take the output of that and feed it into another function, and you just pipeline these things together, and that's the way you build programs inside of Elixir. So we have expressive dev joy. Let's talk a bit about the language now. So, language, language, language. We have IEx again, and we're going to look at some of the types here. We have integers, as expected. 0x2A, that's kind of nice; you know, we get our hex right there. We don't get barked at for dividing seven by three; it just gives us a float back, the thing you'd expect. You have a type called atoms, which are sort of the thing themselves. They're only comparable to themselves. They're just a handle or a constant, where even if you have a whole cluster of different machines, the atom goat is equal to the atom goat on another server, and it's not equal to anything other than the atom goat. Strings are real strings inside of Elixir. You might have heard horror stories about Erlang, like, oh, it's bad at strings. And it turns out it was a problem with naming that was the problem with Erlang and strings. The thing they called a string is not what people expected it to be. In Elixir, a string is what you expect it to be. You get this UTF-8 binary good thing, and it behaves like you would expect it to. Actually, it behaves a little bit better than you'd expect it to, and we'll see that in a second. We have lists. You have lots of list processing inside of functional languages. And you have tuples, which are a positional thing. So we see that. Okay, let's look at pattern matching for a second. So x equals five. We got our five there. And then we'll say five equals x. That's a little weird. What we're actually doing here is a pattern match, and it's going to evaluate the same. It's happy to say, yep, five is equal to x. This is a match. And you can pin x down and say pin x equals five, and it's happy to say that matches. Then we say pin x equals x plus one, and it's going to bomb and say nope, no match: the left hand side is not equal to six.
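Condensed into code, the matches just walked through look roughly like this (IEx-style, with the failing case left as a comment so the snippet runs):

```elixir
x = 5          # binds x to 5
5 = x          # a match, not an assignment -- succeeds, both sides are 5
^x = 5         # the pin operator: match against x's current value, still 5
# ^x = x + 1   # would raise MatchError -- the pinned 5 can't match 6
```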
We then have pattern matches against more complex structures, like tuples here. We have an {a, b, c} tuple that matches against apple, banana, cherry, and we see that we bound a to apple, b to banana, and c to cherry as well. That's what we showed here in the shell. For pattern matching we'll have a little bit bigger of an example. We have a list here, one, two, three. See, it binds. A pattern match is sort of one way of looking at it, but we're actually assigning this thing, so list is now equal to one, two, three. And we say case list, and we're going to do pattern matching here inside of this case statement. And we're doing this in the shell here; you can have multi-line things, you can have complex bits inside of the shell, which is pretty awesome. So we say case list do, and the first clause is 11, 12, 13. This is not going to match, because one, two, three is not going to match that, right? Okay, is this next one going to match? Nope, not going to match, because this is a tuple one, two, three rather than a list one, two, three. It's a different type, right? Then here we have one, x, three. And this is going to match, and it's going to bind the x that came in. It's going to bind the two to the x in the middle, and we can then use that later on in the same clause to do our string interpolation and show this. And we'll see that happen here. So it matches and binds two to x. Digit separators: this is something that's coming in C sharp seven, I believe. So we've got a thousand, and I can also put in an underscore. I can say 1_000, 100_000, and so on. It's just pure candy. I mean, this is the dev joy side of the language. There are just all these things that make life a little bit nicer. It's not important, but it's just awesome. It's thoughtful. So here we're going to look at Enum and streams. We're going to define an anonymous function. In Elixir there's a convention that if a function returns a true or false, you put a question mark at the end of its name. So that's a nice convention. You see it, you know what to expect. So we're going to define this anonymous function that's going to use the rem function. Rem returns the remainder of dividing two numbers, so this is like a modulo. So we say rem, and the &1 is going to take the first argument that's passed in. It's going to rem that number, comma two. So if we pass in a five, the remainder of five, comma two would be, well, one, and so it's an odd number, right? So here we have a range. We have one to a hundred thousand. And we're going to pipe this forward. Any F sharp people in the room? All right, you'll recognize this. So we've got one to a hundred thousand piping forward, so that each one of those values is going to get pushed in as the first argument to the thing on the right. This is functional pipelining. Enum.map, and we're going to take whatever the argument was, multiply it times three, and we're going to pipe that forward into Enum.filter. And we're going to filter on our odd? function that we defined at the top. And then we're going to Enum.sum that, and we get our answer. Let's do the same thing, but this time we're going to use Stream.map and Stream.filter, and then we do Enum.sum. We get the same result. The difference is, Stream is lazy.
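Here is roughly that pipeline, written both ways; the variable name is mine, but the numbers match what the shell would give you:

```elixir
odd? = &(rem(&1, 2) != 0)   # anonymous function; "?" marks a predicate

# Eager: each Enum stage walks the whole range and hands a full
# intermediate list to the next stage.
1..100_000
|> Enum.map(&(&1 * 3))
|> Enum.filter(odd?)
|> Enum.sum()
# => 7500000000

# Lazy: Stream stages compose, and elements are pulled through one at
# a time when Enum.sum finally demands them.
1..100_000
|> Stream.map(&(&1 * 3))
|> Stream.filter(odd?)
|> Enum.sum()
# => 7500000000
```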
And so with our Enum version, at each stage of that we went through the whole computation and then passed that on to the next stage in the pipeline. With Stream, it's dragging one element at a time as you come through. It's a pull-based thing. And these both work off the same interface. It's called a protocol; they both implement the Enumerable protocol. This is like interfaces in C sharp, and the protocol idea comes from Clojure, where it's directly borrowed from. I mentioned strings earlier. This is kind of cool. We can have straight-up UTF-8 awesomeness right here. So all the Norwegians with funny characters in your names, you can appreciate this, right? Person equals... so let's look at maps. We've got our person: name Brian, beardy false. I'm only beardy in the winter, not beardy in the summer. And so we can access it by key, like this, or you can access it by dot, person.name, like this, and get the same thing. We can build a new map based on an old map and just push in the keys that we're going to change or override. And so we have the same thing, Brian Hunter, here. Okay, let's look at structs. Yes? [Audience question: in that last part, do you still reference the old object?] Yep. Yeah, I could look at person and it's still there. Everything's a copy; it's not mutating. So, good question. Structs build up on top of that idea, but give us some extra goodies. We're going to define this module called Person here, and we're going to put a defstruct in here. We say name, and we're going to give it a default of empty string, and we say beardy, and it's going to default to false. And we then have this data structure that we can start using. So I make a person and I get my defaults. I'm going to say brian is equal to a Person with name Brian. So we see a Person, beardy false, name Brian. Then I say Person, name Brian, barty false. Boom. There's no such thing as barty. So we got protected from using the wrong key there. There's no such thing as barty; there's beardy. So it blew up and barked and kept us from getting into all the sorts of trouble that you've probably all experienced with JSON. All right. Versatile. This is a really crazy thing. You expect certain things out of languages, and here we have something you would only expect to be able to do in C. We have the bit syntax, and we're going to run through this really quickly. We're going to grab {:ok, bin_data} and we're going to read this file. So if we look back on this image, this is a five-pixel-wide black, red, green, blue, white bitmap, and this is the data below. So we're going to read this file and capture it. We capture the binary data there into the bin_data variable. And we're going to set up a pattern match where we're going to catch on the left, using this DSL called the bit syntax, the data if it matches against bin_data here. And so we'll see how that worked. So here's our data again. Our left two bytes have to be BM in a bitmap. So you can just go through and think about binaries out there, and you can implement, using this DSL, whatever protocol or format you're trying to handle. So that was a match. We're going to throw away 64 bits. We're going to capture 32 bits little-endian into a variable called offset_to_pixels. We throw some away, we grab the width, five, and the height, one, then for the size we're going to throw 16 bits away, and we're going to make sure that the next thing is a 24 here. And then we're going to throw the rest of it away. Now we're going to take that offset_to_pixels, which told us where the actual pixel data begins, and we're going to remember that, throw away that many bytes, and capture the rest into a variable called pixels. We do that, and we see the values that were here. Then we have this for comprehension where we're going to walk pixels eight bits, eight bits, eight bits into blue, green, red. And here we go, we've got our data. So that's one line, one screen worth of code inside of the shell, and we've, you know, parsed a bitmap, which is kind of awesome.
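A hedged reconstruction of that parse: the segment sizes follow the standard BMP header layout, but the variable names and exactly which fields are kept versus discarded are my best guess at what was on screen.

```elixir
{:ok, bin_data} = File.read("tiny.bmp")

<<"BM",                                  # first two bytes must be BM
  _::size(64),                           # file size + reserved: 64 bits away
  offset_to_pixels::little-size(32),     # little-endian offset to pixel data
  _::binary>> = bin_data

# Throw away everything up to the pixel array, capture the rest.
<<_::binary-size(offset_to_pixels), pixels::binary>> = bin_data

# The for comprehension: 8 bits blue, 8 green, 8 red per pixel.
for <<blue::8, green::8, red::8 <- pixels>>, do: {red, green, blue}
```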
So, something borrowed. A lot of things borrowed, actually, and one of the things borrowed is the Erlang VM. The Erlang VM grew up out of Ericsson doing awesome, serious, hardcore telecom things that were focused on fault tolerance. And to have fault tolerance, you have to have concurrency. If you don't, a process dies and you're fully down. You have to have distribution. If you don't, a box catches on fire and you're fully down. So those things are important. And you see the track record: the fact that your phone works, and that over half of the world's mobile traffic is going through Erlang, which is pretty incredible, the data, text, voice. A more recent story is WhatsApp, with their victories on the Erlang VM. Just amazing, amazing numbers here, a billion active users a month and so on. And another number that's important to the business is the 19 billion dollars they were paid by Facebook, with 10 engineers building this mighty code base. If we look at what the Erlang VM is in context, we have the runtime, we have OTP, and we have the languages: Erlang, Elixir, Lisp Flavored Erlang. There's a talk here this week on Lisp Flavored Erlang by Robert Virding. And here's why people can't just grab the Erlang VM, like the Erlang bits, and pull it into their language. It comes down to this thing being an operating system, and it took a lot of time to build it. It's a serious operating system. It's not a general purpose operating system like Linux or Windows; it's a special purpose operating system about making a safe place for code to run. 250-plus years of development went into it. You can see an example of it being an operating system in Erlang on Xen, the Zerg demo. This boots up, in contrast to the 300 seconds it would take to boot a Linux VM on EC2, an instance of bare metal Erlang running on the Xen hypervisor. It boots up in milliseconds, brings up the Erlang VM, runs a web server, processes things, kicks out your request, and you have this result in 0.3 seconds. Pretty awesome. So let's look at the job of an operating system, which the Erlang VM is: process management, interrupts, memory management, file system, so on and so on. And think about what our code, our C sharp, our C, our JavaScript, has as a job. That job is to eat as much core as it possibly can. It's only worried about itself. It's not interested in the safety of everything else running on the machine. And so the operating system kind of has to hate you. It has to hate your code. It can't trust it. And this is where it's really different on the Erlang VM, because the Erlang code cannot do things to hurt the stability. The Erlang code itself has to play by the rules that the Erlang VM set up for it. The Erlang VM doesn't have to support C and Java and Ruby and all these languages.
It supports languages that are only going to do things that the Erlang VM allows them to. So that trust is nice. When you can cooperate, you can get a lot done. When everyone's fighting against each other, you don't get a lot done. And so this is a big part of that story. So we've got our Shaun of the Dead reference; we'll get back to processes. So processes are the way we win here. Concurrency. Concurrency is not the same thing as parallelism. A lot of times those words get used interchangeably, but concurrency is about the structure, dealing with lots of things at once, and parallelism is about the execution, doing lots of things at once. This comes in on a story at Ericsson. When they built Erlang in 1986, they weren't targeting multi-core machines in Stockholm then. That wasn't a thing then. And so they built concurrency, and they didn't build it so that code could go twice as fast. They built concurrency so that they could code without getting twisted up into these horrible nests of callbacky weirdness and thread weirdness. They wanted to make the code simple to read and to write, because code that's simple to read and write is less buggy, and that helps your fault tolerance story. So they did this massive moonshot starting in 86, and then they turned on multi-core support in 2006, and code that had been written a decade before did then run twice as fast on two cores, four times as fast on four cores. And around 2010 or 11, there was a test where properly written Erlang code showed linear scaling up to 40 cores. We get that same thing in Elixir, because it's the same VM; it all compiles down to the same BEAM. So let's look at Elixir and the Erlang VM and the actor model. Every actor gets some props. Our actors are processes, and every process gets these props. One is memory. Each process is one kilobyte on a 32-bit machine, two kilobytes on a 64-bit machine. Inside of this memory we have a heap and a stack, and it belongs to that process only. No one else can reach in and grab a reference to memory inside of that process. It's isolated. And it's also immutable, right, because it's functional. And so garbage collection: it's the easiest job in the world to be a garbage collector on the Erlang VM, because only one process is looking at the memory, and you can't change it once it's set. And so each of these processes gets its own isolated, dedicated garbage collector. So there's not going to be a big stop-the-world GC that happens. Each one gets garbage collected independently, which is pretty amazing. And that's done for low latency, and it's done for deterministic scheduling. Our mailbox: this is the only way a process talks to the world. Alas, poor Yorick. So we talk to the world through our mailbox. This is the only way a process can reach out. We send a message out of our mailbox, we address it to another process, and it lands in their mailbox, where they pick it up when they go into a receive block. Links and monitors are the things we build up on to create some fault tolerance goodies. It's at this OS level, this Erlang VM level, that we get links and monitors. A link is basically like, I'm going to link to José over here, and if José dies, I'm going to die with him. Okay? Sonny back here, and there's a different sort of thing: I'm going to monitor Sonny back here. So if Sonny dies, well, I don't exactly want to die, but I'd like you to tell me about it. I actually care about him, but I don't want to die. And so it's a different level of commitment there.
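As a tiny sketch of those two tools (mine, not from the talk's slides): trapping exits at the end converts the linked crash into a message, so the snippet itself survives to print both lines.

```elixir
# Monitor: one-way. We don't die; we get a :DOWN message about it.
{pid, ref} = spawn_monitor(fn -> exit(:boom) end)

receive do
  {:DOWN, ^ref, :process, ^pid, reason} ->
    IO.puts("monitored process died: #{inspect(reason)}")
end

# Link: bidirectional. Normally the crash below would take us down too,
# but trapping exits turns that death into an :EXIT message instead.
Process.flag(:trap_exit, true)
linked = spawn_link(fn -> exit(:boom) end)

receive do
  {:EXIT, ^linked, reason} ->
    IO.puts("linked process died: #{inspect(reason)}")
end
```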
And so those are the two tools you have out of the box. So, process scheduling on the Erlang VM. We have a CPU core, and we have a single scheduler that lives on that CPU core. We get three processes up, and each one of these processes is going to get 2,000 swipes at the core. 2,000, 2,000, 2,000. There's nothing in the world they can do to eat more core than those 2,000 swipes. They can't block and hold up the show. Everyone gets preemptive scheduling here. So this is another way of visualizing it: they each get this much, and they get thrown back into the rotation. The same thing happens if there's multi-core, if there's big multi-core. Yeah, it's all happening. We also have the supervision bit, built off the links and monitors we were just talking about. We have a supervisor and a worker. The worker, something happens. The network card's down, the hard drive failed, whatever. We have some weirdness. The supervisor can then restart the worker. And this is important, because what do you do when you call tech support about something? The story is, reboot it, right? That's the first thing they walk you through, and it always works, and you're like, oh, it worked again. And so this model is built into Erlang. It works so well for tech support that they included it as part of the language. So, a process is running, the process is running, and boom, it falls into a receive block. It's going to sit here forever and ever and ever, blocking, waiting for a message to appear in its mailbox. No one's talking to it, so it's going to sit here. Well, does that wreck the show? Have we now blocked and wrecked the show? Well, no, we haven't. Because the Erlang VM schedules it out of the mix, and then we schedule across the other two remaining processes, back and forth. And we don't bring this guy back into the mix until the postman, the Erlang VM, the operating system, knows that that process actually has mail. When it does, it's brought back into the fold, part of the rotation. This includes things like file I/O. Any sort of I/O that we have goes through message passing like this. So even though I wrote File.read earlier on that bitmap file, when I said File.read, I was talking to another process, and then I waited for that process to send me a message back. I didn't have to do this as a developer; it's just part of File and the read function on there. It's part of GenServer. And so at the point I asked for that file, immediately when I asked for it, that process over there was doing the file reading, and I was removed from the rotation until I had a message in my inbox. So, non-blocking I/O that you don't have to worry about and can treat like sequential code. It's really done right here. Four cores, four schedulers, you've got processes, and we have this game of balancing and compaction that happens. Busy scheduler, busy scheduler, I'm not so busy, I'm going to take some work from this guy and move it over here, so this scheduler can get sleepy and the whole core can go to sleep. And this will happen so that you get this compaction across your cores, and in a server closet you'll have half your cores able to go to sleep if they're running Erlang, which is pretty amazing for power savings. This core, memory locality, things are building up here, he's getting hot, hot. This one does some work stealing to keep it all evenly balanced.
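Pulling the process story together, here is about the smallest possible mailbox example (mine, not the speaker's):

```elixir
pid =
  spawn(fn ->
    # This process is parked by the scheduler until mail arrives;
    # it costs nothing while it waits.
    receive do
      {:hello, from} -> send(from, :world)
    end
  end)

send(pid, {:hello, self()})

receive do
  :world -> IO.puts("got a reply")
end
```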
So we get this game that we play, and we get massive concurrency, preemptive multitasking, soft real-time, and low latency. And the low latency is valued more than raw throughput, which is an unusual characteristic for a language. Scheduling, parallelization, and fault tolerance in Node: you're pretty much left to your own devices here. I mean, you're going to build your own operating system on every project, or use something out there that someone else has had to build. So on Node we have cooperative multitasking, and on Elixir we have preemptive multitasking. Cooperative multitasking sounds good, right? But what it means is you have to cooperate, and if you don't, you can wreck the entire scheduling system. On Elixir you cannot do that. Single threaded event loop, cooperative multitasking: it's like they read my mind. Then we get into this trouble with callbacks, and I know that this is one that you've heard a lot about, and there are a thousand blog posts out there saying there's no such thing as callback hell, it's a myth, and all this. But a lot of people are talking about it, so some people experience it. There is discipline; you can follow all these paths, you can go down this route, and you can solve your callback hell problem and have flat code by hoisting things around and flattening things out, and on and on and on, all these patterns, if you're a diligent person. There's a funny use of words in Node, but we'll talk about that in a second. So: Node.js does not automatically manage the number of workers for you, however; it's your responsibility to manage the worker pool for your application needs. So, Node's easy? That doesn't sound easy. Distribution tends to be very unbalanced due to operating system scheduler vagaries; loads have been observed where over 70 percent of all connections ended up on just two processes out of a total of eight. Well, Node's supposed to be easy. That doesn't sound easy to me. Okay, now let's talk about Elixir and simplified distribution, and I think I actually do have time for this demo, which is really awesome and surprising. So here we have just a simple loop called blabber, and I'm going to move over here and use three consoles, three terminal windows I have set up. I'm going to start up an IEx and pass it --sname cat with a cookie of taco. Here I'm going to start up an IEx with the name of dog with a cookie of taco, and here I'm going to do bird with a cookie of taco. So I've got these three named virtual machines, these three nodes, brought up here. Okay, and if I say Node.list, I don't have any friends yet. So I say Node.connect, and I'm the cat, so I want to connect to the bird, and it says true. If I say Node.list again, I see this. The bird knows that something's happened too, and he's like, uh-oh, the cat knows about me. So I'm going to say Node.connect again and bring the dog into the mix, because the dog always saves the bird from the cat. And I go back over here and do my Node.list, and oh, we all know about each other at this point. So we have a full mesh between these nodes, and that's all it took to do this. This can happen across multiple boxes, across heterogeneous hardware: 64-bit Windows, 32-bit Linux, a Raspberry Pi on ARM architecture. We can have this cluster formed, and we can deploy code across it.
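Sketched out, the clustering demo is just this (host names will differ on your machine; myhost is a stand-in):

```elixir
# In three terminals:
#   $ iex --sname cat  --cookie taco
#   $ iex --sname dog  --cookie taco
#   $ iex --sname bird --cookie taco
#
# Then, inside the cat node:
Node.list()                    # => [] -- no friends yet
Node.connect(:"bird@myhost")   # => true
Node.connect(:"dog@myhost")    # => true
Node.list()                    # => [:"bird@myhost", :"dog@myhost"]
# Connections are transitive by default, so all three nodes
# end up in a full mesh.
```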
So the code I'm interested in deploying here is this blabber server. We're going to call start, which is going to spawn a process based on the server loop here. This is a really short piece of code. We fall into a receive block and wait for someone to send us a stop message. If no one does, then after 200 milliseconds we're going to say, nice, we've got some uptime here, and then we tail call loop into ourselves, passing plus one. So that's our logic. So let me ls and make sure that that code is there. I'll compile blabber, just to make sure I've got a fresh, good one here. Okay, and I'm going to store off a pid: pid equals Blabber.start. So I've got my process ID that was returned from that, and here it goes: nice, zero years of bug-free uptime on cat@deadless. Okay, I go over here to dog, and it's like, I'd kind of like to do that too. So I say pid equals Blabber.start, and bam, it doesn't know anything about that, because I only compiled it over here on cat; it wouldn't exist over on bird either. But what I can do here is say nl: nl Blabber. And what happened then is I just deployed my code to all my connected nodes. So over here on dog, I can now say pid equals Blabber.start, and it works. And I can go back over here, and I can say, I want to spawn this process on another node. I say pid2 equals Node.spawn, and I'm going to spawn that on the bird node, passing the node name, the module name, the function name to call, and the arguments, which is an empty list, right? And so now this process is running over on bird. We've got the one running here, but we're actually getting the IO piped to us; it's running over on that box. If we killed it, we would no longer be getting this message on the left saying six years of bug-free uptime. Okay, so that's pretty cool, I think. So here, let's fix some code. We've got a bug in here, don't we? What's our bug here? We don't have enough gusto. That's our problem. Our bug here on this line is we need more gusto; we need an exclamation point here. Okay, so we've fixed our bug, we're going to ship this code, and so we first recompile it, and we get this: on the next tail call into the loop, our fix is hot-code deployed on our running server. And I say nl to deploy this code to my other servers, and notice what's happening there. Over on our other node, that's picked up, and it deployed over here to this one as well, whose output is being piped back to us. So this is pretty crazy awesome, is what this is. And this is as short an example as I can show; it's a basic way of seeing the foundations of how the real stuff works.
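A reconstruction of that blabber loop follows. It's close to what was on screen, but the exact wording and module layout are guesses; the fully qualified tail call is the standard BEAM trick that lets the next iteration pick up hot-loaded code.

```elixir
defmodule Blabber do
  def start, do: spawn(__MODULE__, :loop, [0])

  def loop(years) do
    receive do
      :stop -> :ok
    after
      200 ->
        IO.puts("Nice! #{years} years of bug-free uptime on #{Node.self()}")
        # Fully qualified call, so a freshly nl'd module takes over here.
        __MODULE__.loop(years + 1)
    end
  end
end

# In IEx:
#   pid = Blabber.start
#   nl(Blabber)                              # push the module to every node
#   pid2 = Node.spawn(:"bird@myhost", Blabber, :start, [])
#   send(pid, :stop)
```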
So we've got to hurry now, because I took the luxury of a sweet, sweet demo. Let's talk just briefly about OTP. This is where the safety and fault tolerance story really comes in, and it's built up on these basic things that we all get by using the Erlang VM, by using Elixir. We get applications, we get supervisors, and we get GenServers. GenServers are where we do the work, and these all play together in this nice system, proven at Ericsson over the decades on systems that have nine nines of uptime. So we're talking about milliseconds of downtime a year, and that's a common sort of thing. Everything has to be carrier grade for it to be telecom, which is five nines, but you actually beat that with Erlang because of these amazing bits here. So, application, supervisor, GenServer; this is looking at the tree of it. An application spins up and brings up all of the things that need to run in the context of that application. So you'll start up, you'll have supervision, where that supervision is then over here watching our GenServer, right? And if that GenServer hits some weirdness and dies, it'll be restarted. If this GenServer down here causes some trouble, or any of its peers, we've got a bunch of GenServers being supervised by this supervisor, and if any of those croak, this supervisor is going to restart them. If it happens so often that we meet some threshold, the supervisor here can actually be killed, and if that happens, the supervisor above it is going to bring it back up. And so you have these whole trees of these things, depending on what you expect to happen in the code and how nasty you see the problems being (there's a sketch of such a tree below). So in Elixir, we code the happy path. We don't have a bunch of try-catchy stuff worrying about, oh, what's going to happen? Because you never can really predict it. You can predict some things, but then you should maybe just code it differently. What's easier is to code the happy path, and when things go weird, just let it crash. Let the process crash, and leave the problem of handling the error to supervision. On the Node side, it's a different story. An unhandled exception means your application, and by extension Node.js itself, is in an undefined state; blindly resuming means anything could happen. And this is right off of nodejs.org, and this terrifies me. The safe way to code for massive concurrency is to code sequentially. The easiest code in the world is code that goes top to bottom that you can just read. Callbacks are not really easy to grok; in Elixir you actually code top to bottom. You have massive concurrency, but you're coding top to bottom. It's not your job to think about all of the plumbing of how the concurrency works. So think about this: you've got 100,000 connected users making all these callbacks, you're running on the single threaded event loop on this single process, and you don't really have a supervision model. So one of those dies. You've created this attractive nuisance here. Node is this attractive nuisance where people come in and they can just break their ass. You can really get hurt on this thing. We have all sorts of tools that we've been talking about in Elixir where you get out of this pit of failure, and you end up in a place where it's actually easier to build systems that work. It's easier to do your day-to-day coding. One of those things is being able to code with a powerful REPL, the shell there. So when I'm building my Elixir code, I try it out in IEx. I can do all my things, I can see it work, and when it works, I hoist that up and put it into a test to protect me against regressions. So this is like TDD, the part where you actually shape your interface by the way you call it; you're getting that constantly inside of the REPL, and then you can hoist that code out to protect you against regressions. So this idea of testing is big, because it's big in the Ruby community, and it's made very easy here in Elixir. Made easy by having good docs, and then doctests like we saw earlier.
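Back on the supervision side, here is a minimal sketch of such a tree, written against today's Supervisor API (the talk predates some of this sugar, so take the exact calls as illustrative rather than the speaker's code):

```elixir
defmodule MyApp.Worker do
  use GenServer
  def start_link(arg), do: GenServer.start_link(__MODULE__, arg, name: __MODULE__)
  def init(arg), do: {:ok, arg}
end

defmodule MyApp.Supervisor do
  use Supervisor

  def start_link(arg), do: Supervisor.start_link(__MODULE__, arg, name: __MODULE__)

  def init(_arg) do
    children = [
      MyApp.Worker   # crashes here are restarted, not try/caught
    ]

    # :one_for_one -- restart only the child that died; if children die
    # too often within the window, this supervisor itself gives up and
    # *its* supervisor takes over.
    Supervisor.init(children, strategy: :one_for_one, max_restarts: 3, max_seconds: 5)
  end
end
```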
There's a website called the Node Way that talks about this; it's a protecting-you-from-trouble kind of website, a guidance website. Node.js has always been easy to grasp, but difficult to master. There are common pitfalls that have never been well documented. It's up to you. There's painful trial and error, you know, on and on and on, and it's like, oh my goodness. This is the thing, the tool that's supposed to be easy. This is what was going to make us able to write code more simply and deploy more simply, and that's just not what happens, because constant diligence is required of us there. And we shouldn't do this to ourselves. We shouldn't allow tools and frameworks and languages to put us in that spot where we're constantly having to be on guard, because we're fallible, we're human. So at this spot, we have now traveled through our hero's journey, where you've had this introduction to this thing, and it's kind of up to you to go off and pick it up and do awesome things with it. And I think you're going to be stunned at what you can pull off, and how it's not just easy at the beginning. It continues to be easy. It's as easy as it can be, should be, I guess. And so hats off, really, to José over here for this thing he's built. It's absolutely stunning. And at the end of this, I love that the hero's journey actually ends with this line: return with the elixir. So hopefully you all will go and return with the elixir. My two closing bits here: Elixir is pure, sustainable joy, I think, is what you get on Elixir, and you get something maybe a little bit different over here. It's popular, it's crazy, and it's dangerous. And I'm going back to that country here at the end of the week, so that's the world I live in. So thank you all for being here. We have this over here, so please vote and tweet, and go off and install Elixir and have fun with it. And then also come right back in here in a minute for our next session with Yan Cui. It's on F sharp in the real world, and the FP track will be in this room all week, other than the workshops, which are the hour after lunch. Those will be in, I think it's room 10, but it's the workshops, and it's where you can just hang out with all the speakers in the FP track and have them walk you through things. And so we'll have José, we'll have Joe Armstrong, we'll have Robert Virding, we'll have Mathias Brandewinder, who ran the F sharp machine learning workshop with Evelina. An awesome bunch in there; I hope you join us for that. And other than that, actually, I cannot believe it, I have a minute and a half for questions. Anyone? Yes? [Audience question, paraphrased: is Node basically just good for one particular kind of problem?] I think so. I think it has that sweet spot. I mean, it definitely has a sweet spot of standing up a website quickly that can handle lots and lots of connections, but then you've got to have someone actually do the work. [Audience follow-up: what about using Node for an API, for instance? If you have 100,000 users, things can get hairy, right?] They would there. On the other side, it's easy to handle 100,000 users on the Erlang VM. This is the sweet spot of the language: that fault tolerance, concurrency, distribution. That's what it was built for. Back in the 80s, that's what it was built for: 100,000 phone connections coming in, and people expected their phones to work, back before mobile phones with apps. You expected your phone to work, and so you had fault tolerance.
So I remember as a kid, when a squirrel would get into the transformer on the power lines and go bang, and all the power in the house would go out, we'd pick up our phone and call the power company. And that just tells you about that fault tolerance story. It's real there, and it's real because of these tools. And again, the Erlang VM is why that's happening. Over half of all the world's mobile traffic is going through this. So massive concurrency: easy to do. It's just the sweet spot of the language. A place where I would use Node, and I think maybe this is what you were getting at, a place where I do use Node, is here: this is Atom. I think JavaScript is a great language for the client. JavaScript obviously works well in a browser, and it's a very good thing to have on the client. It's ubiquitous, it's out there, people know it, you know the problems with it. And you don't have 100,000 people getting hurt if my Atom crashes. And so that idea of Atom and Electron, there are talks here this week on Electron, I believe. It's a good thing to have it there. It's really nice, the tooling they've built up. But I wouldn't want to use it on something where I had to count on fault tolerance, and I wouldn't want to have my company building on it for business logic. And I know companies that have done that. They got lured in. Their business logic, everything, they ported from C sharp because of problems that they saw there. They didn't like the experience there. They moved over to Node, and now they're trapped. They feel like they're in deep. They started bringing in microservices, which is basically the plea for help. So when Node shops start talking about microservices, what they mean is, we've screwed up; we're going to have this tiniest bit of code, and maybe this will get us out of the trouble we're in. Well, think about what functions are, and processes with message passing. That's microservices. That's how you want to handle that problem. Any other questions? Yes? [Audience question, paraphrased: it's a functional language, everything is immutable, so your functions are pure. How do you handle side effects?] Well, this is a really interesting one. You'll never, ever hear anyone on the Erlang VM talking about monads. But you have something like that. So what is the IO monad for? We know in Haskell, if we're writing Haskell, that we want all of our code to be sane, and we want to be able to reason about the world, right? But we still have to have side effects. If your code can't write to the monitor, if it can't write to the hard drive, if it can't write to the network card, you're not actually going to get anything done. So you wrap that inside of the IO monad. Well, on Elixir and Erlang, the equivalent of that would be port drivers. All of my Elixir code, my Erlang code, can only talk to other processes through message passing, right? It can't reach out to the operating system, can't poke through and do anything. It's all in that safe, safe world. So the way the ugly, scary world is hooked in is through port drivers. Those are written in C, and they meet a certain interface that makes them look to the Erlang VM as if they are Erlang or Elixir. They look native on that side. But the scary, scary world is out there, and the only thing that can touch the outside world is a port driver, which is a process, which can be supervised.
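At the Elixir level, the closest everyday cousin of that idea is a Port: the outside world (here just an OS process) shows up as something you exchange messages with. A tiny sketch, mine rather than the speaker's:

```elixir
# Spawn an external OS program; its stdout comes back as messages.
port = Port.open({:spawn, "echo hello from the scary outside world"}, [:binary])

receive do
  {^port, {:data, data}} -> IO.puts("the port said: #{data}")
end
```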
And you don't go off writing port drivers all the time, because there's the file system, there's the network card; there's a basic number of things that you want to talk to in the outside world, and the way you do that is through port drivers. But I love that question, and I forgot to actually mention it earlier, so thank you. Yes? [Audience question, paraphrased: are there things the VM or Elixir is just not good at?] So, I'm always surprised. People say, well, Erlang is not a great language for doing machine learning, that sort of thing. Well, it turns out it's actually pretty good at machine learning; you can see something like the F sharp digit recognizer project and actually do that here, and I'd explore that path. The thing it's not good at is that it doesn't have a static type system. So there are going to be errors that you don't catch at compile time. This is a place where F sharp absolutely just beats the hell out of us over on Elixir, because with a static type system you catch all these problems. It depends on the kind of code you're deploying and why you're deploying it. If you're going to a mobile phone, for example, where you have to go through the Apple store for a week of review and all this, and you can't change your code and can't get users to install the new one, you want to catch every problem you can at compile time. That's why I'm really excited that Xamarin, and I have my Xamarin shirt on under here, has this F sharp support, and there's also a talk on F sharp and mobile coming up a little bit later today. That's a place where I would use F sharp for sure; you don't even have the option of using Elixir on a mobile phone. But I would use it for those places. Now on the server, it's actually a lot easier. Think about this: we did this hot code deployment. It's easier to deploy that new Elixir code to a server than it would be if you had a static type system with DLLs and all that. That was just crazy easy. We changed the code, deployed it across this cluster, and everything worked. And so it comes down to what your deployment story is. I think that if an app has to work really, really well immediately and is going to be really hard to update, that's a static type system problem. On the other side, if you have a system that has to run and run and run for years with no downtime, well, the static type system protects you most in that first hour. It's kind of like term life insurance; there are different thresholds for when it's most valuable. Static type systems are helping you the most right when you deploy, in that first little bit of the life. Something like Elixir or Erlang is helping you if you need that thing to be running for years with no downtime. And I think I am now out of time. So Yan will be in here next, so don't get too far away, because Yan will talk about F sharp in the real world, and he is awesome. He's an amazing presenter. And thank you all for being here. This is awesome. Thank you.
Node with its sweet-spot of quickly standing up back-ends has caught fire in dev shops around the world. Depending on your business case, that fire can yield a high-fiving “we did it!” celebration, or a charred project timeline with scorched, haggard developers. When is Node OK? When is it dangerous? What’s the alternative? Many seasoned Node developers are discovering Elixir makes a great lifeline when Node turns creepy. They're escaping to a polyglot approach: JavaScript in the browser, Elixir on the server. OK… but why Elixir? Answer: Elixir is approachable and productive like Node, but it’s much more versatile and safe than Node. Elixir is an expressive functional programming language that is 100% "good parts” borrowed from Erlang, Ruby, Clojure, Python, F#, and Node.js. Elixir delivers familiar (modern) tooling, developer joy, simplified distribution, massive concurrency, and carrier grade fault-tolerance. Curious? Good! Come join the fun!
10.5446/51847 (DOI)
Okay, welcome everyone. So I'm going to be talking about Project Orleans, this Orleans framework that I've been developing since the beginning, so it's very dear to my heart. Who in the audience has ever played a Halo game? Oh, that's good. So the experience you're dealing with is very different from what's behind the scenes. So instead of explaining what I mean by saying Halo scale, I have this video that I'd like to show you at the beginning, also to wake you up a little bit. So let's see if it plays. [Video:] Halo is a rich, immersive story with millions of loyal and dedicated fans. We deliver an exciting and engaging experience to these fans. They need to know what the hot playlist is today. They need to know what the challenges are. They need to know where their friends have been, what their friends have been playing, have their friends gotten more medals than them. They need to know all of this, and they need to react to it and interact with their friends in real time. We need to deliver hundreds of thousands of updates per second to millions of players across the Halo universe. We need to get the right information to the right device at the right time. There is nothing off the shelf that solved the problems we needed to solve at the scale we needed to solve them. So we turned to Microsoft's extreme computing group. Hundreds of thousands of requests per second across thousands of servers in real time. These guys are crazy, but in extreme computing, those are the kind of challenges we like to tackle. [End of video.] You can probably tell that video was made a couple of years ago, so I was younger. But I think that clip gives a very good idea of what we're talking about; we're talking about the scale of Halo and those kinds of services. So we're going to be talking about the cloud, obviously, and people give these definitions of the cloud. By the way, we're also going to be playing a game. I'm going to be playing the game Name That Tune. Who knows Name That Tune? No? So when you see, in the top right corner, a sentence in quotes, if you know what song it's from, or at least what band played it, just yell it out, and whoever gets the most answers right will get a beer at the party. So there's a prize. So just yell. This one is the hardest one, I promise you. Anybody know? Anybody? No? That's actually from David Bowie. It's also my test for the age of the audience, just to get a sense. Well, no. No Justin Bieber, no Taylor Swift, and no ABBA either. You don't want me to sing on stage. Next time. So when we talk about the cloud, really the essence of the cloud is that you get these enormous resources available for you to rent. That's why everyone's got the power: you can get an almost infinite amount of resources as long as you have a credit card to pay for the services. This power had been available to major corporations or governments for a decade, but now anybody can do it. A small startup can suddenly grow from nothing to a so-called unicorn. I hate the term, but they call them unicorns. But with great power, as they say, comes great responsibility. To build systems at that scale, you face new challenges, or old challenges in a new form. Like, for example, concurrency. Who in the audience enjoys debugging multi-threaded code and data races and deadlocks? I don't. I'm just kidding. But now, who likes to do that in a distributed setting, when you have logs from, say, 20 machines, and you try to figure out what happened?
That's an order of magnitude more difficult than just attaching a debugger and finding that deadlock or that data race. So you have these issues of distributing your computations, concurrency, and scale. Failures are the norm in the cloud. What used to happen maybe every few years or every few months, those failures now happen every day, depending on your scale, because machines get rebooted, they get patched, and you see it as a failure oftentimes. So there is a set of new challenges that we haven't seen before. And then businesses look at this and try to figure out what to do with it. All that glitters is gold. Name that tune. Thank you. That's great. One point. So you hear this cacophony of analysts and consultants and talking heads saying, here's the solution. For example, a few years ago people were saying, you see, Facebook was built with PHP and MySQL, so if you use these technologies you can build anything, right? They built Facebook. SOAP, and even before that, web services, they were supposed to solve all the problems in the world. All good technologies, don't get me wrong. These technologies are fine, but when somebody says that this technology will help you build a cloud-scale solution, I look at it as if they're trying to sell you this elevator, or, if you watched Willy Wonka and the Chocolate Factory, the Wonkavator. You push a button and you go up and out, and that solves all the problems. Like, for example, Go, right? It's the new hipster programming language, because Docker is written in Go. So again, if you learn Go and write in Go, it will solve your problem. Of course not. That's not the case. And then you see other comments, like, oh, you have to be stateless, or the observation that microservices, as a term, a good architectural term, got abused too fast. And this is my favorite. Mary Jo Foley, thanks Mary Jo, she said that this release would solve all the cloud problems, back in 2010. That's my favorite one. But then you see this picture. Who has heard Kyle Kingsbury talking about Jepsen, Call Me Maybe? Great. If you have never watched it, go to YouTube and search for Kyle Kingsbury, Jepsen, Call Me Maybe. You will not regret it. Everyone who deals with the cloud has to watch that talk. He's a brilliant guy. He single-handedly showed that pretty much all the open source distributed databases that are available fail to maintain their guarantees in the case of network partitions. He got a beefy machine in his apartment and ran all this commercially available open source software in a set of VMs, and he recorded reads and writes to these databases while he was partitioning connections between the VMs, simulating actual network partitions and node failures. And he showed that every single one, MongoDB, Redis, Elasticsearch, all these technologies break down, lose data, violate their guarantees. So he showed this picture of a tire fire, and he explained that at the top of it, at the API level of the databases, you have rainbows and unicorns. Everything is fine from the API perspective. But if you look underneath, under the covers, there's this tire fire of code that doesn't really maintain its guarantees. So you look at that, and it's very hard to decide what to do. That's the reality of our industry. In my view, if you step back, there's this triangle of real concerns. You have compute, you have state, and you have connectivity. And there are many choices, and you have to make trade-offs.
"Who are you, what have you sacrificed?" Name the tune: Jesus Christ Superstar. Because you need to sacrifice something to get something. For example, batch processing is very efficient. If you can afford high latency, if you can process within minutes or hours, you can be extremely efficient by pushing a lot of data through map-reduce-style processing with Hadoop. But if you need sub-second latency, that doesn't work. You have to sacrifice that efficiency for low latency. And these challenges and trade-offs go on and on. Databases: SQL is very good at transactions and guarantees, but it doesn't scale well. Key-value stores are very good at partitioning and scaling, but they usually don't provide the secondary indexes that SQL gives you for free. So again, you need to sacrifice something to get something. I've just highlighted what we were concerned with in the project. And then there's the CAP theorem. I hope everyone has heard of the CAP theorem, which says that you cannot get consistency and availability at the same time in a distributed system. That's pretty much an axiom. So this is the real challenge we deal with when we talk about the cloud. And the solutions differ. We can hire hero developers. Years ago at Microsoft, in the developer division, we had a different term: the "Einstein developers" category. These are people that can build very complex systems. Somebody built Google, somebody built Facebook, somebody built MSN and Hotmail and those kinds of systems. So it is possible to tackle these challenges and build this stuff. But those developers are rare, they're expensive, and they're all happily employed. If you try to build a business by hiring a bunch of hero developers who can solve all these problems, you can run out of budget very fast. But most likely you won't even be able to hire them, because why would they leave a job they like and join your company? In reality, when you try to hire people, you need to look at the available pool. Who here can program in Erlang? Okay, there's a couple of people. Yeah, I know. Scala? Yeah, one person, two people. F#? Okay, more, but still a minority. I have really sincere, deep respect for people who master these technologies. Really. Joe Armstrong gives great talks about Erlang. But if you look at reality, you can't find enough people who have mastered these technologies, so if you try to hire an army of them for your company, you'll fail. And even at the hero level, these developers are not immune to mistakes. The pattern of successful high-scale services, if you look at Twitter, LinkedIn, Facebook, is the same: they rearchitected and rewrote their systems three or four times as their usage grew. They had to throw away the whole solution, essentially. Not just incrementally improve, not just refactor, but throw away the architecture and put a new solution in place at the most critical time, when the business was growing. And some people argue that this was the failure of MySpace: the reason MySpace lost the competition to Facebook is that they weren't moving fast enough. They couldn't scale with their users, and the experience suffered. They were too slow. So I would argue it's not a scalable solution to try to hire more than a handful of hero developers for a company. But that's us looking at the problem as engineers.
But if you talk to business people, they look at it through a very clear business lens. They see time to market and return on investment. Those are the terms they use, which means: I need to build systems fast, I need to build them cheap, and they need to be reliable so they're cheap to operate. Capital expenditure versus operational expenditure. That's why my mental picture is of those people trying to sell you this elevator, one elevator where you push a button and it takes you up to the cloud, which is not realistic. Oftentimes it's just a bunch of people who don't know what they're selling, or charlatans trying to sell you a bridge to nowhere. In reality, you need a stairway, where you can walk or you can run. Because you're in a competition: if you're walking while your competition is running, you're losing. You have to run to stay in the competition. There's an interesting quote from Alice in Wonderland, where the Queen says: "It takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that." I think that's about our business; it's not about Alice in Wonderland, it's not for kids. So that's my mental picture: we need a stairway. Something realistic. Not climbing with ropes, not a magical elevator, but something real. If we step back and see how we've been building services: we didn't call them cloud services a decade ago, but for 15 to 20 years we've been building them as n-tier, three-tier architectures. This picture must be familiar to everyone, I assume. You have a stateless layer of front ends, the web servers that terminate client connections, do authentication, DDoS protection, admission control, and then forward requests to a middle tier (or several tiers, but still a middle tier). A stateless middle tier that talks to storage to pull data in, performs an operation, and potentially writes data back to storage. So if a request comes in for a user profile, the middle tier calls storage, give me the profile for that user, then does some update and writes back to storage. Or maybe it doesn't even write an update; it returns data back to the front end to render a web page or respond to the mobile client. This is a wonderful model. It's beautiful, it's very simple. You can scale easily by adding more servers to the middle tier and more servers to the front end. The problem is that storage is much more difficult to scale, especially if you have a database like SQL Server or an Oracle database. At some point you exceed its capacity and it burns out, so you cannot scale. As an industry we realized that a long time ago, so we put in a solution: the cache layer in front of it. Memcached, Redis, all those solutions. They reduce the load on the storage, because the first time you read, you put the data in the cache, and after that you read it from memory, which is much faster; you move data between memory and you only go to storage to update. But in reality that complicated the solution so much: now you talk to two storage systems, you have your cache storage and you still have your core storage, you need to coordinate them and you need to write updates to both. And as you probably know, cache invalidation is fundamentally one of the hardest problems in distributed systems. So the programming model is not really nice. What I think we really want is a stateful middle tier, where the data is cached but the compute also executes.
So this is what I call the stateful middle tier; it has the benefits of both. Instead of putting data in a cache and running compute somewhere else, can we have them together? Name the tune, anybody? "Could this be good together"? No? The Doors. So I would argue that's what we want, and that's how we approached Orleans when we started working on it. We really tackled two challenges. We wanted a programming model which is easy and attainable for a wide range of developers, so you don't have to be a hero developer to understand it and write successful software with it. But we also didn't want to turn away expert developers, those heroes; they should like the model as well, and the model needs to be flexible and powerful enough to empower them too. So that's the trade-off between simplicity and power. We also didn't want to make developers 20% or 30% more productive. We wanted qualitatively better productivity, which means three times, five times, ideally ten times more productive. And the main way we know how to make developers more productive is to have them write less code. Because the best code you write is the code you don't write, since you don't have bugs there. That's paradoxical, but it's true, right? If we can eliminate code from our code base, we eliminate the bugs we would have introduced there. So that's what we targeted. The goal was to reduce the amount of code you write, but also to make the code you do write much simpler, so you'd be less error-prone and more productive writing, debugging, and testing it. The second pillar of the project was to make this code scalable by default. Which means: if you write code following some simple guidelines, there is a good chance it will scale. If you suddenly have ten times more business, ten times more customers, or a hundred times more customers, your code will still work. You may need to tweak and optimize a few places, but you won't have to rearchitect and throw away the whole thing like in those cases with LinkedIn, Twitter, and Facebook. So those are somewhat conflicting goals. Who has heard about the actor model? Excellent. I hope people attended yesterday's talk by Roger Johansson. For those who don't know, you can think of the actor model as just a distributed object model. You have these isolated entities that do not have direct access to each other's memory. They have to send messages to say, hey, do this for me, or give me this value. And of course they can create other actors. The model was invented in 1973 by Carl Hewitt, a long time ago; you can imagine there was no cloud, and it was built for a very different purpose. Hewitt invented it as a concurrency model for single-machine, single-process systems, for artificial intelligence applications. But this often happens in our industry: nothing is new under the sun. The approach got rediscovered in the late 80s and 90s by Joe Armstrong at Ericsson, who built Erlang as a new implementation of the actor model for the control plane systems of their telco equipment. Later, some distribution features were added through OTP. In the cloud space, people rediscovered this model again, because if you think about it: since you have these independent entities that exchange messages, they make no assumption of locality. If I'm sending a message from actor A to actor B, I don't assume they're on the same machine.
The implementation of the runtime could have assumed that; that's how some are implemented. But fundamentally the model allows these actors to run anywhere, as long as messages can be delivered between them. So it's easy to distribute these actors. That's what we took as the base approach for Project Orleans. Name the tune? No? Also The Doors. We didn't want to just blindly copy the existing models. We took an independent approach, and we came up with what we later called virtual actors. As we worked on the system, tried different approaches, threw away some early versions, and worked with early customers, we realized there are fundamental challenges in the existing approaches: in the Erlang approach, and in Akka, which is sort of a JVM clone of Erlang. The fundamental difficulty is that in a distributed, highly concurrent system, it's very expensive to write code to coordinate these actors. We need to create an actor for the user the front end received a request for. But what if three front ends receive requests for the same user? First they need to check: do we have an actor for this user in some registry? They do that concurrently, and they all get the response: no, we don't. All three of them independently decide: I need to create a new actor for this user. Of course, they try to create the actor in parallel, then they need to register it in the registry, and all but one of them must fail and handle this gracefully. There's a lot of coordination to get right, and of course that kind of code works fine in a simple unit test. But when you run at scale, suddenly you have this concurrency and these race conditions. And that's what we heard from Erlang developers later: that this is indeed one of the biggest challenges of building distributed systems in Erlang. The idea behind a virtual actor is very different. The analogy is virtual memory. When you write code to touch or update the value at array index X, you never check with the operating system: is this memory page in memory, or is it in the page file? You don't write code to say, load this page from the page file for me and then I'll set the value. You just set the value, and it's the operating system's job to realize, oh, this page was in the page file, I'll bring it in, let you update the value, and once it gets cold, I'll write it back to the page file. It's the same basic idea. All actors in Orleans, we call them grains instead of actors, to differentiate, because actors in Orleans are very different from what people are used to thinking about actors. So that's why we call them grains. These grains always exist, virtually. You can always make a call to any actor in the system as long as you know the identity of the actor. And a call will generally always succeed, regardless of whether the actor is in memory, or in storage, or in the process of being activated. All this complexity of coordination is handled by the runtime; the Orleans runtime really performs the heavy lifting. It's interesting that we discovered people equated the actor model with Erlang's approach, as if they're the same thing. When we started talking about Orleans, the first reaction was: what you've built is not an actor model, because you don't have supervision trees. They took it as an axiom that you have to have supervision trees to be an actor model, which is actually not true. And Carl Hewitt, no less, said no, that's not the case.
His complaint about Erlang was actually quite similar: without knowing it, we ended up doing in Orleans what he wanted. So you remove all this complexity of managing the life cycle of these actors and give it to the runtime. As a result, you write less code and you write simpler code. That's how we achieve the goal of developer productivity. So let's look at how the code looks in reality. When I'm asked to explain what Orleans is in one sentence, I say: distributed C#. Like any two-word or 30-second description, it's not accurate. It's not really about C#, it's distributed .NET, but that works for people, because you program with the same paradigm: you have interfaces and you have classes. You start with the interface. You define what we call a grain interface by extending one of the marker interfaces. In this case I use IGrainWithGuidKey, which says that actors of this type, the grains, will have a GUID as their identity. Within this interface you can have one or more methods. The one requirement for those methods is that they're asynchronous: they return a Task, a promise for a value. Who is familiar with TPL, Task, async/await? Great, the majority. For those who are not: I think that's the best innovation in C# 5.0. When I first started talking to JVM people, they didn't believe me it was real. When they see the code, it's just brilliant how it works. So that's the requirement: all calls are asynchronous. Whenever we make a call, in this case HelloWorld's SayHello, you get the result right away, before anything has happened: you get a promise. Task<string> means a promise for a string that will arrive later. Maybe milliseconds later, maybe seconds later, but later. You're not blocking on this line. That's why the one requirement is that everything is asynchronous. When we invoke the grain, this is an example, I just need three lines. I use the static class GrainFactory and say: give me a grain that implements this interface we defined above, for a user with this identity. I pass the GUID, and what I get back, under the covers, is a proxy object, the variable user, which implements the interface I asked for. It's returned immediately; it's constructed locally; there are no messages involved. Then I can make a call, in this case user.SayHello, right away. The first two lines will probably take nanoseconds to execute, because they do nothing; they just say, okay, here's a promise for a future result. And then, through the magic of the await keyword in C# 5.0, you can say: execute the rest of the method when that result comes back, without blocking the thread. This is very simple. The code looks straightforward and sequential, but in reality it executes very efficiently, because we're not blocking the thread. Under the covers the compiler rips out the remainder of the method as a continuation and executes it asynchronously later. That's all I need to write to make a call to a grain. And once I get the response back, once the await returns, I can do something with the value. When I implement the grain class, it's also very simple: I extend the base class Grain from the library and then implement one or more of the grain interfaces that I defined. So again, it looks just like normal object-oriented programming, unlike one-way message passing, state machines, and things like that. You just implement interfaces and classes.
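To make that concrete, here's a minimal sketch of the pattern just described: a grain interface, its implementation, and the three-line call site. The names (IHelloGrain, HelloGrain) are mine, not from the talk, and the code assumes the classic Orleans 1.x API with a static GrainFactory, roughly as the speaker describes it:

```csharp
using System;
using System.Threading.Tasks;
using Orleans;

// A grain interface: grains of this type are identified by a GUID.
public interface IHelloGrain : IGrainWithGuidKey
{
    // All grain methods must be asynchronous and return a Task.
    Task<string> SayHello(string greeting);
}

// The grain implementation: a plain class extending the Grain base class.
public class HelloGrain : Grain, IHelloGrain
{
    private int counter; // private grain state

    public Task<string> SayHello(string greeting)
    {
        counter++; // safe without locks -- see the single-thread guarantee below
        return Task.FromResult($"Hello #{counter}: you said {greeting}");
    }
}

// Calling the grain from anywhere (another grain or an Orleans client):
public static class Caller
{
    public static async Task CallHello(Guid userId)
    {
        // Returns a locally constructed proxy immediately; no messages sent yet.
        IHelloGrain user = GrainFactory.GetGrain<IHelloGrain>(userId);

        // The call returns a Task<string> right away; await resumes the method
        // when the response arrives, without blocking the thread.
        string reply = await user.SayHello("NDC");
        Console.WriteLine(reply);
    }
}
```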
But notice also that this SayHello method has a counter; it increments a counter on the last line. And the reason I can do this without any locks or synchronization is that every method of a grain executes with a single-thread guarantee. The Orleans runtime guarantees that your code never runs in parallel on more than one thread within a single grain. So you always have full control of your private state; you can always assume that nothing else is touching it. You don't need locks, semaphores, or any other synchronization mechanisms, which simplifies your code and, again, removes lots of bugs. That's a reflection of the original idea of the actor model's concurrency model: you can write safe code. Nobody else will touch your variables while you execute. Even your own methods won't, because they only run one at a time. So what happens behind the scenes: a grain is a logical construct. It always exists, but its physical incarnation goes through a lifecycle. It can be in persistent storage, and most of the time it's probably there, not in memory. Only when a call arrives for a particular grain does the runtime instantiate a physical incarnation of that logical construct, which we call an activation. It goes through an activation process where, if needed, it loads its state and calls the method that acts like a constructor: hey, I'm activating you, do your initialization. Then it delivers the request that triggered the activation. For a while, the grain stays in memory. Then the runtime checks when that activation of the grain last got touched, last got a message to process. If it hasn't been called for a while, and by default that's two hours, but it's configurable, you can set one minute, five minutes, for different types, there's no need to keep it in memory. So the runtime garbage-collects it. Again, it goes through a deactivation process: hey, I'm about to deactivate you, if you want to do something, here's your chance. Then it removes it from memory. That's the model behind the scenes. On the caller side, you program as if the grain is always in memory, but in reality the runtime manages resources and does this distributed, asynchronous garbage collection of your resources. And I'll stress again: with no code from the application, maybe just configuration for how fast, how aggressive you want this garbage collection to be. So if we go back to the picture with the actor-based middle tier: because of this lifecycle, what's in memory is really just a sliding window over all possible grains. Only those that were used recently, within the period before they get garbage-collected. An example would be a major game like Halo or Call of Duty. They've sold probably 40, 50 million copies. That doesn't mean all those users are in memory, are active. In fact, there are very few days in the year when more than a million of them are playing at the same time. So there's no reason to keep the state of 50 million players in memory. You can have activated just those who actually turn on their console and start the game. As they stop playing or shut down their console, their grain becomes cold and gets deactivated. The runtime does this resource management for you, for free.
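Those two lifecycle hooks look roughly like this in the Orleans 1.x API (OnActivateAsync and OnDeactivateAsync); the grain and method names here are hypothetical:

```csharp
using System.Threading.Tasks;
using Orleans;

public interface IUserGrain : IGrainWithGuidKey
{
    Task Ping();
}

public class UserGrain : Grain, IUserGrain
{
    // Called by the runtime when a physical activation is created:
    // load state, warm caches, register timers, re-attach handlers.
    public override Task OnActivateAsync()
    {
        return base.OnActivateAsync();
    }

    // Called before an idle activation is removed from memory:
    // last chance to flush state or clean up.
    public override Task OnDeactivateAsync()
    {
        return base.OnDeactivateAsync();
    }

    public Task Ping() => Task.CompletedTask;
}
```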
Again, you write no application code for that, maybe just configuration. The runtime runs as an overlay over physical resources or virtual machines. On every virtual machine you run in the cloud, or physical machine if you run on premises, there is usually one process of the Orleans runtime, called a silo. Those silos form a cluster automatically and start pinging each other to see who's up and who's down: if this silo didn't respond to me three times, I suspect it's probably dead. So the runtime does all this magic of tracking hardware status, essentially. If one of the machines blows up, the runtime automatically detects it and knows which grains were running on that machine. They're gone, they're lost, because the machine disappeared. Maybe a physical hardware failure, maybe a network cable got cut; there are many reasons why a machine disappears. What's important is that once this distributed logic realizes we lost the machine, it knows which grains were running there and aren't running anywhere anymore. So when new requests arrive for a grain that used to be there, it can place it on a different machine. You can operate the cluster without that machine for a while. And if that machine gets repaired or restarts and comes back, it joins the cluster again and becomes another resource for placing and executing grains. All of that is done by the runtime; you don't need to write any code. Your individual request may fail: you make a call to a grain and you may get an error back, for many different reasons, like storage being unavailable, or something else, or the machine died in the window before the runtime realized it was dead. You may get a failure. But fundamentally, you can keep repeating the request, and eventually it will succeed once these conditions recover. You don't need to write code that understands where things run or what state they're in. You just write your code in a simple manner, as if the grains always exist and are always in memory. So, beyond Hello World, let's look at something more complicated. This is a made-up social network example, so I have the notion of a friend. I have this IUser interface with a method AddFriend. Notice that in the method signature I can use IUser as an argument type. The runtime knows how to serialize these grain references and pass them around without you writing any code. In fact, at compile time it generates serializers to efficiently pass data types and preserve them as if everything were on the same machine. So we define the interface, and then let's see how we implement a call to this method. The first two lines get references for two grains, for me and for my friend. Like in the Hello World example: give me a reference for the grain of this type, with this identity. And then I just call AddFriend on my grain and pass this reference directly. What's important to understand here is that the reference is logical. It's always valid. It doesn't point to a physical machine, a physical IP address, a URL, nothing like that. It just encapsulates the type and identity of the grain. So it's always valid: I can save it in a database, I can shut down my system, I can restart it a week later, I can read that record and make a call to this grain, and the call will succeed, because the runtime will activate the grain with that identity, deliver my request, execute it, and deliver the response to me.
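Here's a sketch of that example, with hypothetical names (IUser, AddFriend) matching the talk's description; note the grain reference being passed as an ordinary argument:

```csharp
using System;
using System.Threading.Tasks;
using Orleans;

public interface IUser : IGrainWithGuidKey
{
    // A grain reference can be used as an argument type;
    // the runtime serializes it like any other value.
    Task AddFriend(IUser friend);
    Task<string> GetStatus();
}

public static class FriendExample
{
    public static async Task MakeFriends(Guid myId, Guid friendId)
    {
        // Two logical references: no network traffic yet, always valid.
        IUser me     = GrainFactory.GetGrain<IUser>(myId);
        IUser friend = GrainFactory.GetGrain<IUser>(friendId);

        // A horizontal, grain-to-grain call in the middle tier:
        // the friend reference travels inside the message.
        await me.AddFriend(friend);
    }
}
```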
So unlike physical references, these are logical references that are always valid. One thing that's not so obvious here: we're making horizontal calls. These grains live at the same level, in the middle tier. If you go back to the three-tier architecture picture: if I had logic in one user that needed to make a call to another user, I would have to go all the way out to the web service layer, make a call to the user service passing the target user ID as one of the arguments, and go all the way back through the front ends to another middle-tier server to execute the request. Here, the whole call happens in the middle tier: direct communication between grains on the same layer. The other interesting thing here is the try/catch. As you can imagine, the caller, my grain, and my friend's grain can be on three different machines. So how come we can catch an exception here? Here's a picture to demonstrate it a little better. Say a front end receives a request and makes a call to grain A to process it. As part of that logic, grain A calls another grain, B, of a different type, maybe on another machine, which in turn calls grain C to do its part of the logic. And imagine grain C throws an exception. For example: the friend you passed to me is already in your friend list, so you're not allowed to add him twice. Or this person cannot be your friend for whatever reason. Traditionally, you'd have to analyze the return result and propagate it back, and then propagate it back again. What happens in Orleans: if you write zero code for error handling, the exception from C will be delivered to its caller, B. And if B has no try/catch, it automatically propagates to A. And if A doesn't have a try/catch, it propagates to the original caller. (Code that runs on the front end, outside the grain space, we call Orleans clients.) So the exception is automatically propagated all the way up, with no code. I can put a try/catch anywhere I want, for example in C or in B, but by default, with no code, it propagates. And as I mentioned before, error handling code is usually the buggiest code, because it's the code that's hardest to test. What we get here is essentially distributed try/catch semantics: a very powerful construct where I only put code where it's actually needed. In most of these cases there's nothing I can do; I can't retry or do anything to fix the error. I just need to report to the end user that the request failed, with the error code or description from the exception. I can do that at the front-end layer and just render a web page or respond to the mobile client. So that's actually a very powerful feature of the runtime. So, another example, staying within the social network theme. When I say social network, don't think just Facebook or Twitter. Gaming: a multiplayer game is a social network; it's just much more fluid, where relations form for a multiplayer session and dissolve, and users join different sessions. It's essentially a social graph. If you're talking about IoT devices, it's kind of the same, but much more static: you have sensors, rooms, buildings, these social-graph-like relations. So it's not limited to the traditional notion of a social network.
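A sketch of how that looks from the calling side, reusing the hypothetical IUser interface from the sketch above; an exception thrown by any grain down the chain, if not caught in a grain, surfaces at this await:

```csharp
using System;
using System.Threading.Tasks;

public static class FrontEnd
{
    public static async Task<string> HandleAddFriend(IUser me, IUser friend)
    {
        try
        {
            // me, friend, and whatever grains they call may sit on
            // three different machines; the try/catch still works.
            await me.AddFriend(friend);
            return "Friend added.";
        }
        catch (InvalidOperationException ex)
        {
            // e.g. "already in your friend list", thrown by a remote grain
            // and propagated across the wire with no error-handling code.
            return $"Request failed: {ex.Message}";
        }
    }
}
```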
So imagine I need a method that returns the status of all my friends. For example, my stupid UI wants to render a table with friend and status, friend and status. And let's say I'm very popular: I have a thousand friends. If I did it naively and called one friend at a time, got a response back, then called the next friend and got a response back, then even if the latency of a single call is very short, say 10 milliseconds, calling a thousand friends serially gives a minimum latency for the whole series of 10 seconds: 10 milliseconds times a thousand. Of course I don't want that. I want to call them in parallel. And that's very easy to do, to fan out calls in Orleans (sketched in code below). It's two lines: a foreach where we call friend.GetStatus. Remember, GetStatus returns a promise, a Task for a result, which we put in a list right away. This whole foreach executes within nanoseconds or microseconds. It doesn't do anything yet; it just prepares those messages to be sent. And then, through the magic of TPL and async/await, we can join, in my example, a thousand promises into one task that resolves when all of them have been responded to, and await that. With that one line, we await all the responses. Once they all arrive, we can process the results and render my web page, my stupid table of friend statuses. So in a few lines of code, we fan out requests and process the results very easily. It's very easy to do these kinds of patterns. In the ideal case, our latency is the latency of a single call. Also notice that, again, we wrote no multi-threaded code, no blocking. We do nothing out of the ordinary. We write as if it's single-process code in a single application running on a single machine, but we get a lot of parallelism. If you have enough cores, all these calls execute in parallel. It feels like a desktop app, but it actually runs on a cluster. Who's familiar with MPI? A few people. It's a library for very efficient distributed computations. There's this famous professor, Dennis Gannon, who told me a couple of years ago: we don't want to teach our students MPI anymore, because it's very hard to get right. With Orleans it's so much easier: you can implement the same patterns with far fewer lines of much simpler code. And I was so happy when Carl Hewitt, the inventor of the actor model, wrote a couple of paragraphs about Orleans in a paper last year. He said it's an important step toward the goal of the actor model: that application programming need not be so concerned with low-level system details. That's exactly what we tried to achieve: to raise the level of abstraction, to make developers more productive and the code simple. I think I tweeted at the time that I was ready to retire; then I checked my savings account and decided to stay at work, not ready yet. Interestingly, in another survey Carl Hewitt pointed to Erlang's deficiencies: the lack of error propagation, which I just showed you, exactly what we put in Orleans, not knowing it was his concern, and the lack of resource management. Those were his two complaints about Erlang, and without knowing it, we had implemented exactly those things in Orleans.
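Here is the fan-out pattern from above as a sketch, reusing the IUser interface from the earlier sketch; Task.WhenAll joins the thousand promises into one:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public static class FanOut
{
    public static async Task<string[]> GetAllFriendStatuses(IEnumerable<IUser> friends)
    {
        // Issue all calls without awaiting: each GetStatus() returns a promise
        // immediately, so this loop completes in microseconds.
        var promises = new List<Task<string>>();
        foreach (IUser friend in friends)
        {
            promises.Add(friend.GetStatus());
        }

        // Join all promises into one and await it: ideal latency is now
        // roughly that of a single call, not the sum of a thousand calls.
        return await Task.WhenAll(promises);
    }
}
```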
There are many more features; I'll just highlight a couple of them. One is declarative persistence. You can declare the state for your grain class as a property-bag class, a very simple POCO class, and pass it as a type argument to the base class Grain<T> when you declare, in this example, your user grain class. Then you get this State property of the type you declared. And you mostly use a single method, WriteStateAsync. That's where you say: persist my state. I've set my properties; persist them to storage. How does it work? There's a plug-in model: persistence providers. You don't have to write code against a specific storage technology, like Azure Blob or SQL or S3 on AWS. You just write that single line, and the provider knows how to deliver the state update to the specific storage. You link them through an attribute: I want to use the provider with this name. Then in the config you declare that this name means, say, Azure Table Storage. So you can change the storage you target without changing application code. You may need to migrate your data if you decide to move, but you don't have to change your code at all; you just change your config. This is an opt-in feature. You don't have to use it; you can write code that talks to storage directly yourself. It's up to you, just a convenience feature. We included a few providers with the code base, but there are others built by the community, for storage systems we wouldn't even consider building ourselves.
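A sketch of that declarative persistence, assuming the Orleans 1.x API (Grain<TState>, a [StorageProvider] attribute, WriteStateAsync); the state class and provider name are hypothetical:

```csharp
using System.Threading.Tasks;
using Orleans;
using Orleans.Providers;

// A plain property-bag (POCO) class describing the grain's persistent state.
public class UserProfileState
{
    public string DisplayName { get; set; }
    public int LoginCount { get; set; }
}

public interface IProfileGrain : IGrainWithGuidKey
{
    Task Login(string name);
}

// The provider name is resolved in configuration, so the storage technology
// (Azure Table, SQL, ...) can change without touching this code.
[StorageProvider(ProviderName = "ProfileStore")]
public class ProfileGrain : Grain<UserProfileState>, IProfileGrain
{
    public async Task Login(string name)
    {
        State.DisplayName = name;
        State.LoginCount++;
        await WriteStateAsync(); // persist via the configured provider
    }
}
```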
Another feature we added maybe a year ago, slightly more, came from this need: when people used Orleans and saw the RPC pattern, call and get a response, a remote procedure call, they said, well, I want to return a series of values, or I want to subscribe to values that somebody produces, or I want to produce a series of values. You're talking about streams. So we built a streams API: a single API over different delivery mechanisms. There are three categories. There's direct TCP messaging, where you deliver these asynchronous updates directly over the connections between silos, just by sending messages, no persistence. Or you can do the same over durable queues, like Azure Queues or SQS. And Event Hub is actually in the third category: Kafka and Event Hub are in a category by themselves, because they're not really queues; they're distributed partitioned logs, where you can say, I want to go back to this offset in the log and redeliver messages from that point. A very different, very powerful model. But we have a single API that works across all three. I'd say it was a controversial decision to have one API over the three, because their semantics are different enough. So we questioned that decision, but that's what we did: a single API. If you look at how it works: I get the provider by name, because it's config-driven, like the persistence providers, and then say: give me a stream of integers with this ID. The ID is a GUID. So just as we took the actor model and made it virtual, we made virtual streams: as long as you know the identity of a stream, you can always produce to it or consume from it. You don't need to create it, you don't need to find it. Just say: I want to produce to the stream with this ID, or consume from the stream with this ID. You have a GUID plus a namespace, so it's easy to model things like user-with-ID-X or device-with-ID-Y and then produce or consume messages. You produce by just calling OnNextAsync. We modeled the API on Rx; an async version of Rx was supposed to be coming. That's another controversial decision, because we took the naming from Rx, which may not be obvious or the best choice; we just tried to be consistent with Rx. Regardless: you call OnNextAsync and produce a value, or you can produce a batch of values. On the consumer side, you define your handler and subscribe: for that stream, I want to subscribe my handler. It will be invoked for every value, every event that arrives on the stream. And that's it. Very few lines of code, and again, those streams virtually exist the whole time; you don't have to do anything to manage them. The streams also work not just between grains: they work between the client, the front end, and grains, in both directions. It's a symmetrical model. So if you have a front end that terminates WebSocket or AMQP connections, it's easy for the client to subscribe to event streams from grains and deliver updates in low-latency interactive scenarios. That's what it was built for. There is a lot of complexity under the hood to make this work. There are pulling agents; you need to distribute the work. If you run a cluster of 100 nodes, each node needs to pull from queues if you're using Event Hub or Azure Queues. And as machines go up and down, you redistribute this work; you need caching to be efficient. There's a lot of complexity there, and again, that complexity is handled primarily by the runtime, so the application code can stay simple while the performance remains strong and robust, dealing with failures and redistribution automatically, leveraging a bunch of other Orleans capabilities.
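Here's a sketch of that stream API, with a hypothetical provider name ("SMSProvider") and stream namespace; OnNextAsync produces, SubscribeAsync attaches the consumer's handler:

```csharp
using System;
using System.Threading.Tasks;
using Orleans;
using Orleans.Streams;

public interface IDeviceGrain : IGrainWithGuidKey
{
    Task Produce(Guid streamId, int value);
    Task Consume(Guid streamId);
}

public class DeviceGrain : Grain, IDeviceGrain
{
    public async Task Produce(Guid streamId, int value)
    {
        // Providers are resolved by name; the name maps to a concrete
        // mechanism (TCP, Azure Queue, Event Hub, ...) in configuration.
        IAsyncStream<int> stream = GetStreamProvider("SMSProvider")
            .GetStream<int>(streamId, "Readings");
        await stream.OnNextAsync(value); // produce one event
    }

    public async Task Consume(Guid streamId)
    {
        IAsyncStream<int> stream = GetStreamProvider("SMSProvider")
            .GetStream<int>(streamId, "Readings");

        // The handler is invoked for every event arriving on the stream.
        await stream.SubscribeAsync((value, token) =>
        {
            Console.WriteLine($"Got reading: {value}");
            return Task.CompletedTask;
        });
    }
}
```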
Name the tune: anybody know "when tomorrow comes"? No? Eurythmics. I'm going to drink beer with myself. So years ago we built the first couple of applications on Orleans, and we started talking internally to product groups, saying: look, this thing seems to work. But as usual, people in our industry are skeptical. They look at it and say: it's too simple, it's too good to be true; if it's that simple, there are probably a lot of things it cannot do. There was a lot of disbelief, until these guys came. What happened was that the Halo folks came to us and said: look, we designed this architecture for our future services, but then we discovered the Orleans paper, and it looks like you've implemented 80% of it, much better and deeper than we would have. So why don't we join forces and work on the remaining piece? Oh, and by the way, we need to be in production in three months. And this, like in that video, is a real, true story. Once they left the room, I turned to my team and said: these guys are crazy. If they want to take technology from Microsoft Research and put it in production in three months, I don't know what they're thinking, but let's drop everything and help them be successful. And we put that first service in production, I think it was for Halo Reach, in three months, and it worked fine. We worked out a couple of bugs after launch, but nothing broke the experience. And they decided: this far exceeded our expectations; we're going to standardize on Orleans for the next major release, which was Halo 4. So all of the Halo 4 services were built with Orleans within six or seven months, with very small teams. It was a very productive, successful launch, at high scale, all of that. We proved that it worked. That removed pretty much all the concerns that Orleans is a toy, that it's too simple. People were saying: if it works for Halo, it must work for me, because I'm at a smaller scale. And then other game teams came. Anybody played Age of Empires: Castle Siege? The back end runs on Orleans. And of course in the fall we had the Halo 5 release, which was very smooth. We were asked to be on call for the weekend, and on Friday we were told nobody needs to come in; it runs smoothly. I think that's good. And then came the non-gamers. We have a couple of services built for Skype. We have several services in Azure, monitoring and security. There's that fancy IoT project launching a device into the stratosphere, at 40 kilometers, very high. And the applications you have on Windows or on Windows Phone, if anybody still has a Windows Phone, may not look as sexy, but they have hundreds of millions of users behind them. So it's still a lot of scale, a lot of data to deliver. Another game, Gears of War, is going to be released this fall; it's also using the same kind of back end. We never designed Orleans for gaming, which is paradoxical. People keep asking: oh, you built it for Halo? No, we didn't build it for them; we didn't even have them in mind. They came to us when we already had the system. But I think gamers come first, typically in our industry, because they have a very different environment. They're always on the bleeding edge, always under a lot of pressure, they rewrite a lot of code for the next release. And their market is unforgiving. They have this spike in the first few hours and days after launch, which is very different from any other service. Whatever you hear about Snapchats and whatnot, those have a user base that grows over time, so they have time to fix things up. If things don't scale, if performance drops, they have months and years to improve or even rearchitect. If a major game is released and has a problem in the first few hours or days, you've lost the business. Users will just trash it, and it's unforgivable. So it's a very risky business. And the economics are shifting: the business of selling DVDs through Best Buy and other retailers is slowly going away. It's moving towards virtual goods, virtual currency, content delivered through the cloud. A lot of the logic moves to the cloud. They need to be in the cloud to stay in business, to stay competitive. They're good customers to work with, because they're very fast and very ambitious. Name the tune? Great. So when you talk to, or read, analysts, they talk about quadrants, Magic Quadrants. I thought, why can't there be Sergey's Magic Quadrants, and I defined my own. Yes, that's also Queen. So: interactive entertainment goes beyond gaming. You have interactive TV and other similar types of applications, where you have sub-second latency requirements and high scale, and you need to deliver things tailored to a specific user and analyze things on the fly. That bleeds into near-real-time analytics, which has a different angle but similar requirements of getting data in and quickly making decisions. And then, funnily enough, look at fraud detection.
Fraud detection for credit cards is actually not that different from cheat detection in games: very similar approaches. IoT is the hottest area; that's why, I think subconsciously, I made it red. And there are projects I'm most proud of that people built on Orleans. The thermostats for Honeywell run on Orleans. There's the project that literally built a system to control up to two million mouse traps, because the company services other businesses with mouse traps, and they need to know when to come out, when the mouse is there. A funny IoT project. And another one is a green power storage facility in Hawaii, on Oahu, which stores up to half a gigawatt of power. Some people wrote it's like a small nuclear power plant, but it's just storage for wind turbines and solar panels. And there are many more things that are possible. These patterns show you just a glimpse of it; there's much more that's possible. You can build all kinds of scale-out compute applications with these primitives. We open-sourced Orleans in January 2015, and the experience far exceeded all our expectations. It's a very different experience. Thank you: yes, it is from Sting, "If you love somebody, set them free." It's just a great experience, dealing with all these people out there who collectively are much smarter than you are. You have to be very humble once you go through that experience, because you can never be as smart as all of them. And they're all passionate; they come because they want to contribute, not because somebody asked them to. And that helped hiring: I had no problem hiring five people in just the last couple of months, because I could say, look, you'll be paid to work on an open source project, and you'll be building your GitHub profile for your future employers. The best deal in town, I think. That worked. That will also help move Orleans to CoreCLR and make it cross-platform, because there are people who want to do this work with us. We don't have to do all the work ourselves; we coordinate with the community, and a lot of the work can be done by the community itself. One important thing about Orleans is that it runs everywhere. It's not locked into Azure; there's this misconception that Orleans is for Azure. No, it's not. You can run it anywhere: in your closet, in your garage on some hardware you bought off eBay, on AWS, and some people do that. It's not tied to anything; the flexible configuration and provider models aren't constrained by where it runs. And Microsoft is usually viewed as a fast follower for a lot of technology; I'm proud to say that in this case, JVM people were the fast followers. There's Orbit, a JVM clone of Orleans. They told us very explicitly: they heard of Orleans, read about it, and got blown away by the model. But because they're a JVM shop, BioWare, one of the Electronic Arts companies, they implemented the same model on the JVM, and they like it. And Roger Johansson, who was somewhere here, is trying to do something similar in Go. We moved out of Research, great, thank you, people know it, about a year and a half ago, into a product group, but we continue working with Research. Here are a couple of projects we've been doing recently. One is geo-distribution. All the pictures I've been showing are about a single cluster of machines running an Orleans service.
So we went further: from a single node and single cluster to multi-cluster. Instead of one cluster, you run a kind of constellation of clusters, and you can geo-distribute them. You can put them in different geographies for locality, but also for availability, in case one of them goes down. The model stays the same: you program against these grains as always available. The fact that one data center went down shouldn't be a concern of your application logic; the code should work, and the grain will be reactivated somewhere else, in a different geography if needed. But you can also serve your local customers from the nearest data center automatically. The famous Phil Bernstein, who co-invented ACID transactions, is working on adding ACID cross-grain transactions to Orleans. That project is pretty far along and has some very promising numbers. We also had an optimizations paper published at EuroSys this year, in London, a few months ago. Looking at why I think this model works, I'd say there are just a couple of things to consider. One is context orientation: the Orleans model works when you have lots and lots of independent contexts, like users, sessions, devices. If your application's requirements look like that, this is where the model works. If you want a distributed database, I would advocate against using Orleans for it: when you have lots of rows and need operations that go across them, that will not be efficient in Orleans. But when you have these independent contexts, it's easy to scale them out, and it's easy to express the logic in the isolated manner of actors. I would also argue that this approach brings the object-oriented view back, and that it's more natural. The world is not service-oriented. The example I use: in the African savannah, when the lion is "talking" to the gazelle with his claws and teeth, he's not talking to a gazelle service with ID X. These two actors interact independently of the other lions and gazelles in the savannah. That's the reality of the natural world: things are not service-oriented, they're object-oriented, instance-oriented. And this model fits it well. In the paper we have a graph showing that we scale linearly, and the numbers now are actually 50% higher, but that's the graph from the paper. So if we get back to the business-requirements picture, time to market, return on investment, I would argue that we more or less hit the first three requirements. Halo demonstrated developer productivity; for linear scalability you can find the details in the paper. I also didn't touch much on high efficiency. The Orleans core is very efficient; that's one reason we built our own serialization layer. People have measured it against the competition and found it, I forget, 23 or 26 times faster than something out there. So we didn't sacrifice efficiency for simplicity. We didn't solve all the world's problems, but I think we gave you enough tools to address a class of applications in a very easy and very powerful way. That's my claim, and as a takeaway I'd encourage you to take a look at Orleans. Take a look at the open source project if you've never done that. If you're on the JVM, look at Orbit. Even if you can't apply these kinds of technologies directly, maybe the approach will resonate later in your work, when you build your own systems.
Learn from our experience, from our mistakes, but also learn that questioning established wisdom sometimes pays off. You don't have to have supervision trees, I would argue. So that's all I have for you. Thank you, and if you have any questions I can answer them now or later. The question is about the relationship with Service Fabric actors and the differences. Yeah, so first of all, the name Service Fabric conveys that it's about a service model, about deploying, running, and managing services. That's the primary purpose of Service Fabric: it's all about hosting services. Yes, Service Fabric includes some libraries, what they call programming models, but they're more like libraries, and one of them is actor-oriented. But even though the APIs look very similar, the simple APIs, in reality the implementations are very different, because Service Fabric's is built to highlight other features of Service Fabric, for example replicated, locally attached storage, while in Orleans it's all remote. The partitioning story is different, placement is different; there are a lot of differences. So I'd suggest you look at those differences and see which works for your case. The next question is about my insight into why the Service Fabric team decided to build Reliable Actors. Like I said, Reliable Actors highlight features specific to Service Fabric, for example replicated storage and in-memory replication in general. If you have these features and you want to leverage them, you need to write service code, and they have stateless services and stateful services; they just added this third model that can leverage those features in a different way. I think that's the biggest reason: to showcase the features of the underlying infrastructure. Any other questions? When you run the system, can you run it on premises, on a single machine, in the cloud? So the question is about hosting Orleans: can you run it on premises, on a single machine, in the cloud? The answer is yes to all of them. Yes, you can run it on a single machine. The developer experience with F5 debugging is especially easy, because you run two nodes within app domains of the same process where your client runs, which makes it very easy to debug and develop. You can deploy to a single machine, because it's just a process you start with the configuration you give it. You can run on premises. In fact, our nightly tests, performance measurements, and reliability tests run on a private cluster, some hardware we inherited for some reason. It's no problem, because the cloud dependency is really just about storing membership information, and we recommend using Azure Table for that anyway, because it's very cheap. You write just a few rows there and pay pennies a month, even if you run on premises. That's how we run our tests: on a private cluster, with membership information stored in Azure Table. And then moving that code to, say, a worker role or a VM scale set is very easy, because the whole mechanism stays the same. Any other questions?
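For context on that hosting answer, this is roughly how starting a local silo and client looked in the Orleans 1.x API of that era; I'm recalling these names from memory (SiloHost, ClusterConfiguration.LocalhostPrimarySilo, GrainClient), so treat it as a sketch rather than exact API:

```csharp
using Orleans;
using Orleans.Runtime.Configuration;
using Orleans.Runtime.Host;

public static class LocalHosting
{
    public static void Main()
    {
        // One silo process on this machine, with localhost defaults;
        // in production, membership would live in e.g. Azure Table instead.
        var config = ClusterConfiguration.LocalhostPrimarySilo();
        var silo = new SiloHost("dev-silo", config);
        silo.InitializeOrleansSilo();
        silo.StartOrleansSilo();

        // The client (e.g. a front end) connects to the local silo.
        GrainClient.Initialize(ClientConfiguration.LocalhostSilo());

        // ... resolve grains via GrainClient.GrainFactory and make calls ...

        GrainClient.Uninitialize();
        silo.StopOrleansSilo();
    }
}
```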
So the next question is about messaging and delivery guarantees. The messaging between actors goes over TCP, between the two nodes the grains are on, or within a single node if they're together. With retries the guarantee would be at least once; the retry logic is there. You can enable it, but we turn it off by default, because in case of failures, if you keep retrying and delivering messages that can't get there, you just exacerbate the problem. So we usually don't recommend blanket retry logic. There's also one thing I didn't mention: there is a built-in timeout. When you make a call to a grain, a timer starts internally, and if there's no response within the set period of time, you get a timeout exception. So either your message gets delivered or you get a timeout exception. In the typical case, when there's no failure, you get a response, or maybe an exception, which is fine. But retries we recommend leaving to the application logic, because in many cases you don't want to retry: you want to do it once, and if it failed, it's too late to retry, for example. So it's in memory; it's not queued, not persisted, unless you use streams. Streams can go over persistent storage, but regular messages, the method calls, go over TCP. Does that answer your question? The follow-up is a comparison with persistent queues; I think it's a throughput question. All persistent queues have limits on both throughput and latency. This model is the most performant, because you don't write to any storage; you just send directly. But if you need the guarantees, you can use streams and go over persistent queues just as easily. That's one of the trade-offs you need to decide on early, but you can change it, of course. You had an example earlier with the streams where you added a handler to a stream. Does that happen inside a grain? Yes. So if that grain goes to sleep, does that handler then disconnect, or is it kept somewhere and gets all the new messages? Excellent question, about handlers attached to a stream: when you subscribe to a stream, what happens to the handler if the grain goes out of memory; is it persisted or not? The handler itself is not persisted, primarily because we couldn't serialize delegates. We wanted to make that magic work at first, but it was not possible. So the typical pattern is: there's this method, OnActivateAsync, which is like a constructor of a grain, called when the grain is activated. That's where you put the logic to re-attach your handler. When a message arrives and the grain is not in memory, it gets activated, the method gets called, you re-attach your handler, and then the event gets delivered. That's how it works. So you keep track that this grain cares about this stream? Yes: the fact that you subscribed to the stream is persisted, and then you re-attach your handler. You have to do it, unfortunately. Any other questions? Well, thank you then.
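A sketch of that re-attach pattern from the last question, assuming the hypothetical provider name used in the earlier stream sketch; the subscription survives deactivation, and only the in-memory handler needs to be resumed in OnActivateAsync:

```csharp
using System;
using System.Threading.Tasks;
using Orleans;
using Orleans.Streams;

public interface IConsumerGrain : IGrainWithGuidKey { }

public class ConsumerGrain : Grain, IConsumerGrain
{
    public override async Task OnActivateAsync()
    {
        var stream = GetStreamProvider("SMSProvider")
            .GetStream<int>(this.GetPrimaryKey(), "Readings");

        // The delegate could not be persisted, so re-attach a fresh handler
        // to each existing subscription handle on every activation.
        var handles = await stream.GetAllSubscriptionHandles();
        foreach (var handle in handles)
        {
            await handle.ResumeAsync((value, token) =>
            {
                Console.WriteLine($"Event after reactivation: {value}");
                return Task.CompletedTask;
            });
        }

        await base.OnActivateAsync();
    }
}
```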
Orleans is the next most popular open source project of the .NET Foundation after CoreCLR/CoreFX/Roslyn. It was created in Microsoft Research and is now developed within Microsoft Studios. Orleans has already redefined how modern scalable interactive cloud services and distributed systems should be built by introducing the Virtual Actor model. Orleans has been running in production since 2011 powering high throughput cloud services for Halo Reach, Halo 4, Age of Empires, Skype, Azure Monitoring, and several other Microsoft products. It is a bedrock technology for the cloud services of Halo 5: Guardians. Since the public preview at Build 2014 and going open source in January 2015 Orleans got used by a significant number of customers ranging from small startups to large enterprises. At the same time, it attracted a group of talented engineers from companies around the globe that have formed a vibrant community around the project. The Orleans core team maintains strong ties with Microsoft Research to keep the stream of innovations going. Come hear how you can leverage Orleans today, what's been recently added to it, what new functionality is coming soon, and about our future plans.
10.5446/51841 (DOI)
Okay, hi everybody. My name is Runegrin Sir. I'm here today to talk about sensitive data in the cloud, and the "fact" that you can't put it there. Or at least, that's the general opinion. I'll be talking today about writing custom software, not software as a service, Google Docs, Office 365, because that's boring. I'm a developer; I write code. That's what I like to do and that's what I like to talk about. Much of what I'm going to say today applies to software as a service as well, but that's not going to be my focus. Before I start, I want to say: this is my opinion, not my employer's. I've got some colleagues here. Because this is actually a kind of controversial topic. This is based on my experience, my opinions. And actually, the idea for this talk came about when I started working with sensitive data in our organization. We write software for the Norwegian hospitals. We're focused on the middle region of Norway, but we support all hospitals in Norway, even some foreign ones. The software we write handles your medical information, so it needs to be secure. At the same time, we see that the cloud offers really nice features: scalability, economy, availability. There's lots of good stuff here. So while we really want to go there, the general opinion has been: you can't do that. That's what I'm going to talk about today. But before I start, I want to know who's here. How many of you are software developers, write code? Yeah, almost all of you. Cool. Are many of you Norwegian? Yeah. At least one of the things I'm more or less going to skip is the boring stuff, the law. I only know Norwegian law after all, so I won't spend too much time on it. How many of you handle sensitive data? Yeah. Anyone doing it in the cloud already? Almost nobody, exactly. Yeah, that's how it used to be. So, my topic today: sensitive data in the cloud. Can you do it? From the discussions we've had at work and what I see on the internet, this is a really hot topic. I see lots of you here today, so I suppose you agree. And at least in my organization, the consensus has been: you can't do that; the cloud is not safe. But what I'm going to say today is that you can. So for those of you who want to go to one of the other talks, that's the short version: you can do this. So hurry up, there's lots of interesting stuff going on. That's my opinion, and I'll try to argue for it. But you have to know what you're doing. And this is not just my opinion. For instance, the Norwegian Data Protection Authority, called Datatilsynet in Norway, actually says the cloud is not only safe, but probably safer than running on premises. And the Norwegian government is working on changing the law now, because they want us to use the cloud; they see the benefits in cost, flexibility, and safety. So this should be quite easy. But it's not that easy after all. If the authorities say you can do it, what is the problem? I think you all have an opinion on that. In essence, whether you host an application in the cloud or on premises, you need to do the same thing: secure it. And you have the same problem: there's a risk that somebody can access your data who shouldn't be able to, because it's secret, it's private. And what's the problem with the cloud? It has the same problems as your local data center. But the cloud is a shared platform. You're not hosting your data locally.
And you don't control the environment. And it's very hard to have full control of what's going on. So you have to trust somebody, an external party, your cloud provider. And also, data in the cloud is transferred over the internet. That may or may not be what you do today. But at least for my applications, handling medical data, we're running a more or less secure network, a closed network. So if I want to move my application to the cloud, then I have to transfer data over the internet. And that's scary, very dangerous. Also, in the cloud, you don't know exactly where data is stored. You absolutely do not know which hard drive it's stored on. You don't know which server, maybe not which server room, maybe not even which country it's in. Most providers let you select the data center. But where is that data center? You've never been there. So, at least there is a perceived loss of control here. I'm not saying that you lose it, but the perception is that you don't control things the same way as you do when you run locally. But the greatest problem, at least the way I see it, is that the cloud is new. It's been there for years, but this is new stuff. We're not comfortable with it yet. And that's what I see as the main problem, because there is a myth that the cloud isn't secure. I call it a myth because it's not true. But we all believe it. At least many do. When I say it's a myth, that's when I focus on the big cloud providers. I talk about Amazon, Google, Microsoft. So, the big ones. If you're looking at the hordes of small providers calling themselves cloud providers, honestly, I don't have any experience with them. If you're handling sensitive data, you should be careful. Go for the big ones. And from what I've seen, the security is really nice. I'll get back to that. But first, let's talk about what sensitive data is. I just need a drink. Did any of you go to the shrimp cruise last night, by the way? Yeah, it was a bit too much fun. My voice isn't exactly where it should be today. I hope you'll forgive that. So, sensitive data. It's information you don't want to share. It's your secrets. It's private. It could be economic information, health, of course. It can be a private discussion, your position, your company's economy. Anything that you don't want others to see. And it doesn't mean that we have something to hide. It's not that. But it's private. As long as it's private, then it's sensitive. But there are various levels of sensitivity. Not all sensitive data are equal. So we need some background here. There are several levels of sensitivity. Usually we talk about three or four. First, you have the directly identifiable data. This is the most sensitive data, the way I see it. It's when, for instance, in your database, you store a personal ID, say a social security number, together with a diagnosis. So you say, this person has this medical condition. They don't necessarily have to be in the same database, but if the data are available and easy to combine, then it's directly identifiable. You know who the person is. You know who the company is. You can point to someone directly. Then you have what I call indirectly identifiable data. This is information that can look anonymous. But as soon as you start to combine it with other data sources, then you can get more information out of it than you intended to.
For instance, if you have a diagnosis, and you don't store a social security number, you don't store a phone number, you don't store a name with that data. But maybe you have the person's age, weight, gender, maybe even which part of the country that person lives in. With that information, and if, say for medical data, it's a rare diagnosis, then you may start to find out who that person is. And then you're exposing sensitive information without meaning to. There are several examples of people doing this. Then you have anonymous data. That can be the same data source, but aggregated over a group of people. Usually you have a rule of thumb: if you have more than five people in an aggregate, then it's anonymous. You can't find out who the person is. It depends on what you're doing, depends on your data. But that's a simple rule that you can use. So even aggregated data can be sensitive if you have a small enough group. Finally, you've got information that isn't sensitive. Say, for instance, public information. Your name, your phone number, where you live, that's not sensitive information because it's probably in the public data sources. It's easy to find on the Internet. Also test data and all that kind of stuff. It's not sensitive. So the more sensitive your data is, the more important it is to protect it properly. And you need to think about that. I'll give you an example. It's a fun story. In 2013, there was a data source released that contained all taxi rides in New York for a whole year. And it contained no personal information, they thought. So they had the taxi cab's ID. There was a date and time when the ride was done. There was where the ride was from, where it was to, how much was paid, did the passenger tip. But there was no name, no number. It didn't say who the person was. So the smart people on the Internet, they took this data and started Googling pictures of celebrities on taxi rides in New York. And they found pictures. They found pictures of celebrities taking a ride at a certain moment. And they were able to find the data in the data source. The same place, same time. Then they knew who it was. So you can see that this celebrity went from here to there and didn't tip. But they could do more with that data. They started aggregating it. And so they found frequent trips between addresses. For instance, from strip clubs to private addresses. And when they looked at those private addresses, some of them, there weren't that many people living there. And your address is public information. So they were able to find out who that person was. Several of them. And they were on Facebook. So this anonymous data source exposed very private information about people. So while your data may not look sensitive, it may well be, after all. So you have to think. Now, how do we secure our data center? I love this picture. That's how it looks at home. We have this myth about the insecure cloud. So how do we secure things? First, locally: how do we secure our local data center? There are many people talking about this. I won't get into the details. But normally you will build a secure data center. Probably underground, in a secure bunker. It's got limited access. Only you two have access; the rest of you can't even access the room. So it's physically protected. All of your servers are redundant. You probably have two data centers. So even if this data center stops, this one is still working. So you've got redundancy.
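Before going further, a quick aside on the aggregation rule of thumb from earlier. Here's a minimal sketch in modern C# (the record type, the field names and the threshold of five are illustrative assumptions, not something from the talk):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical record type, for illustration only.
public record Patient(string Region, string Diagnosis);

public static class Anonymizer
{
    // The "rule of five": suppress any aggregate group small enough
    // that individuals could plausibly be re-identified.
    private const int MinGroupSize = 5;

    public static IEnumerable<(string Region, string Diagnosis, int Count)>
        SafeCounts(IEnumerable<Patient> patients) =>
        patients
            .GroupBy(p => (p.Region, p.Diagnosis))
            .Where(g => g.Count() >= MinGroupSize)   // drop groups of fewer than five
            .Select(g => (g.Key.Region, g.Key.Diagnosis, g.Count()));
}
```

As the taxi story shows, whether five is actually enough depends on the data and what it can be combined with; treat the threshold as a starting point, not a guarantee.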
Then you set up your network. You use reverse proxies. You use heavy duty firewalls. So you make sure that there is no inbound traffic. Everything goes out. You make sure that you have full control of what's going on. And then you set up your application. You follow all the best practices and guidelines, doing proper authentication, probably two-factor, maybe even better. You have full control of authorization. And you secure your data. You encrypt your storage. You use HTTPS, TLS, all the good stuff. You follow all the OWASP recommendations. So you secure your application. And you're happy. You're secure. How is the cloud different? Can't you do that in the cloud as well? And you can. But there are some differences. So let's talk about the bad stuff first. The cloud, it's a shared environment. You don't own the servers. You don't own the environment. You share it with other tenants. So you don't control the data center. You don't control who can access your data center. You actually don't know who can go and look at your servers. So you have to trust. And you don't control where your data is stored. As I said, you don't control the machine. You don't control the hard drives. You don't control anything. And your application, it is not running locally. And if you're offering a service on the internet, that's business as usual. But if you're running in a closed network or a limited network, this can be a challenge. And you must, I can't repeat this enough, you must trust your cloud provider. You need to trust them. You need to be sure that they won't do anything wrong. You need to trust that they are good. You need to trust the certifications. You need to be sure that they know what they're doing. And on top of that, at least if you have an existing application, it probably doesn't fit in the cloud. That's a different talk. I won't get into it. But in some cases, it can be terribly expensive actually rewriting your application to work well in the cloud. But there's lots of good stuff in the cloud. First of all, it's a shared environment. It's not just bad. It's great. Because you have proper separation between tenants. It's built so you can't access your neighbor's data. It's impossible. And this shared environment gives you flexibility. So it can be a problem, but it's also great. And these data centers are run by huge corporations. In your local data center, you're running 10, 100, maybe 1,000 servers, and you think you're big. These guys, they're running millions. I heard that in Azure, I believe Microsoft has one administrator per 15,000 servers. And they're running millions of servers. So they don't really spend too much money administrating it. But they spend lots of money, lots of resources on securing this. That's what they live from. In the cloud, you can usually control which region your data is stored in. Like Azure, they have data centers in Western Europe, Northern Europe, several places in the U.S., Asia, all around the world. And you can say, I want my data to be stored there and only there, if that's what you need. Also, the challenge with network traffic, your data being on the Internet: there are ways to secure network traffic now, actually. Now, there have been lots of problems with traffic on the Internet, encrypted traffic being hacked. But as long as you're up to date on the latest standards, then you're good. And also, the cloud is a rapidly moving target. You get new security features all the time. For instance, again, Microsoft, they just launched SQL Server 2016.
It's been running in Azure for a long time. And the new features there, in Azure, you've had them for a while. But you only just got them locally. So they're updating the cloud much more often, and you get the new features much earlier. So there's lots of good stuff. And all the flexibility, it's really useful as well. There was another example. In 2010, the Stone Age, there was a company in Denmark called Banedanmark. You probably haven't heard of them unless you're Danish. They moved all their services from a local data center into the cloud. You can still read some articles about that. There was lots of discussion: is that smart? What's the risk? They moved to the cloud. Then that winter, the day before Christmas, there was a huge snowfall. All the trains stopped. And people wanted to know what was going on. So they started hammering the websites of the train companies. It was really a DDoS attack. And all the websites went down. Except this one. They were running in Azure. So when the traffic went from 50,000 visitors per day to 5 million, they pushed the button. They got enough capacity to keep it running. So all the services kept running. The customers were, I won't say happy, nothing worked, but at least they got their information. And then after a few days, they could scale down again. And serving those 5 million visitors a day cost them, I think it was 180 Danish kroner. That's like $30. It's nothing. So they could scale up, have the capacity when they needed it, and then scale down again. So the flexibility here is incredible. So in most cases, I believe, actually, the cloud is more secure. The Norwegian government has written a strategy on cloud services. And they actually conclude that they want people to use the cloud because of security. That's interesting. That's new. So how do we protect our data in the cloud? In general, you follow the same rules. Exactly as you would locally. It sounds really easy, and it really is. But we need to consider our data and how to secure it. Your data is a living thing. It exists in several places. We talk about data at rest. That's when it's stored in your database. Then we talk about data in transit. That's when it's moving across the network, either between your servers or on the way to the user. And finally, we talk about data in use. That's when your user is actually viewing your data and using it. All three places pose risks. You need to think about security in all of them. Then we talk about confidentiality. That's how secure we are: what's the risk of someone who shouldn't be able to look at our data actually getting access to it? And finally, we have data integrity. Here we're talking about the risk of your data not being complete. Has it been changed by someone in transit? So those are the factors we have to look at. Data in use. That's a cute cat, huh? Well, your data is in use. Your user is looking at it. So we're talking about the user interface. It's often the first thing we think of. We need to set up proper access control. For sensitive data, you probably want at least two-factor authentication. Ideally, you don't want to do authentication yourself at all. You want to rely on a third party that has proper infrastructure. Here in Norway, we have what is called ID-porten. It's a national service that handles personal identities in a secure way. So not only can they give you an identity, they can tell you how sure they are about that identity. And we can trust them to have the proper infrastructure.
And then we don't have to handle that complexity, unless we think we can do it better. So proper access control, that's important. You will, of course, set up your firewalls. If you need your application only to be available in a limited network, in the cloud you can use firewall rules limited to a certain IP range. If that isn't enough, you can set up a virtual private network or even a dedicated line between the cloud and your local network. So there are ways to limit access to your cloud services, even though they are running in a shared environment. Of course, if you limit access that way, it means that you trust your home network. And here in Norway, all the hospitals, they are running in what is called the health network. It's a network combining all the hospitals and other companies handling medical data. It's actually quite large, sort of a private internet between the hospitals. But the challenge there is that the computers that can access that network can also access the internet. And there is no real governance of who can access the health network; it depends on the various organizations. So is that really a secure network? So why do we trust this network so much, and not the one run by professionals? Sorry, I'm not saying that we're not professionals at home. We have the best people available. But it's a big network. You can't really control it unless you have proper governance and you know exactly who has access to it. As long as the computers accessing your safe network also have access to the internet, you have an open gateway. So what's the difference? So secure your applications, set up firewalls, access rules, all that stuff, and follow all the best practices. OWASP, you know them? They're an organization whose focus is security for applications on the internet, really. They have some really nice cheat sheets, for example on authentication and access control. I'll post links to them later. If you follow those guidelines, I'm not saying they're easy, but they're easy to read. They're clear. They're quite good. And here, whether you're running on premises or in the cloud, it doesn't really matter too much. You need to secure your application after all. In the cloud, you get some security features that can be nice. In Azure, you have Azure AD, and it makes these things quite simple. But the biggest risk is the user. We are the risk, not the network, not the authentication mechanisms. All research shows that the easiest way to hack a system is to fool the users. If someone calls a user of your system and says, hi, I'm from the IT department, what's your password? There is a real chance that they will say, oh, it's ABC123. Thanks. And you have access. It's much easier, much quicker than hacking into a system, brute-forcing password protection or whatever. So the challenge usually is the user, not the platform you're running on. So, data in transit. Here we have the feared man in the middle. It's scary. Potentially, someone can look at your data while in transit. So while your data is on the way from your local data center to your user, they can pick it up, maybe even modify it, read it, and pass it on. In theory. And there have been some, actually several, attacks on the SSL and TLS protocols, the encryption that we do over the network. So you need to stay up to date on the latest versions of the protocols and the software you're running.
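On the application side, recent versions of ASP.NET Core (2.1 and later, so newer than anything available at the time of this talk) can enforce encrypted transport directly in the pipeline. A minimal sketch:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;

public class Startup
{
    // ASP.NET Core 3.0+ signature shown.
    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        if (!env.IsDevelopment())
        {
            // Sends Strict-Transport-Security headers so browsers refuse
            // to talk to the site over plain HTTP after the first visit.
            app.UseHsts();
        }

        // Redirects any plain HTTP request onto HTTPS.
        app.UseHttpsRedirection();

        // ... the rest of the pipeline (routing, endpoints, and so on).
    }
}
```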
But as long as you encrypt your traffic, use the latest versions, and use HTTP Strict Transport Security so you're always encrypting your traffic, then in reality it's very hard for someone to break into that traffic. Of course, if your user has a computer with a bad root certificate, then you have a problem. I've got a Dell, and these computers were sold with a root certificate whose key was easily extracted. So it was quite easy to fake the traffic. So, again, the user can be a challenge. But you need to set up your network properly, and then you're quite secure, actually. So, man in the middle attacks, they can affect both confidentiality and integrity, since not only can the attacker look at the information being transmitted, they can also change it. The difference here between running locally and in the cloud is that a cloud provider, they have lots and lots and lots of resources for securing the platform. Usually, at least where I work, the guys running the network, they're overloaded with work. They have way too much to do. They're brilliant people, really good, but they don't have enough time. So securing our network, keeping everything up to date, that's going to be a challenge. And as the network grows, the challenge is going to grow. In the cloud, when you have one administrator per 15,000 servers or whatever, you can spend more money on securing the network. They do penetration testing. They look for problems all the time. And they have large teams doing that. So, again, most probably, the cloud is a more secure platform than your home network. Finally, data at rest. This is when your data is stored on your server. It can be in a database, it can be in a file, whatever. And this, I believe, is the main challenge when running in the cloud. We need to ensure that whoever runs our servers can't look at our data. Encryption is a friend here as well. You probably want to encrypt your storage, whatever you do. In SQL Server, you can encrypt the entire database, for instance. In Azure, blob storage now supports encryption in preview. So you want to encrypt it. Then you want to make sure that your sensitive data is encrypted again. So not only do you encrypt your database, but you encrypt the data as well. Layers of security. And, of course, you don't use the same encryption keys. Ideally, you don't even have access to those keys. Keep them away from your data. Further, you probably don't want to mix your sensitive data with your other data. Usually, a part of your data is more sensitive than the rest of it. For instance, in the systems I work on, we usually keep the personal ID, the social security number, in a separate database with separate encryption keys. And we keep it encrypted. But it's physically separated from the other data. So even if someone gets access to my database, my data, in the worst case it's not directly identifiable. It's still sensitive data, but it's much less sensitive than if we threw it all in one big pile. And in the cloud, you often get hardware encryption, so you can add another layer of security. It costs money in Azure, for instance, but it's well worth it. And finally, you need to control who can access your environment. In the cloud, it's very easy to say, hey, you need to set up a server, I'll just give your account access, and you can access everything. It's terribly easy. And you need to limit that access. So keep it to a few people who can administer your data.
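As a sketch of that layered, field-level encryption in modern C#, using the framework's built-in AES support. This uses CBC without authentication, so a real system would add an HMAC or use authenticated encryption, and the key would come from a secret store such as Key Vault, never from the database itself:

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

public static class FieldCrypto
{
    // Encrypts a single sensitive value (e.g. a social security number)
    // before it is written to the database. The key (16, 24 or 32 bytes)
    // comes from a secret store, never from the database itself.
    public static byte[] Encrypt(string plaintext, byte[] key)
    {
        using var aes = Aes.Create();
        aes.Key = key;
        aes.GenerateIV();

        using var ms = new MemoryStream();
        ms.Write(aes.IV, 0, aes.IV.Length);            // prepend the IV
        using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
        using (var writer = new StreamWriter(cs))
            writer.Write(plaintext);

        return ms.ToArray();
    }

    public static string Decrypt(byte[] ciphertext, byte[] key)
    {
        using var aes = Aes.Create();
        aes.Key = key;

        var iv = new byte[aes.BlockSize / 8];           // recover the prepended IV
        Array.Copy(ciphertext, iv, iv.Length);
        aes.IV = iv;

        using var ms = new MemoryStream(ciphertext, iv.Length, ciphertext.Length - iv.Length);
        using var cs = new CryptoStream(ms, aes.CreateDecryptor(), CryptoStreamMode.Read);
        using var reader = new StreamReader(cs);
        return reader.ReadToEnd();
    }
}
```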
This is your server room in the traditional sense. So lock it down and encrypt it. Keep your keys safe. There is one problem that I haven't discussed yet. And that's the government. Big Brother. And especially the US government. There are cases where the US government wants insight into data stored in the cloud. And this is a potentially huge problem. All the big cloud providers, the American companies, they have to follow American law. And even though the data is, for instance, stored here in Europe, it's still an American company handling it. And this can be a huge problem. Of course, with encryption, and your keys kept safe somewhere else, it's just garbage if they should access it. But it depends. Software as a service is a bigger problem here, I think. Also, Microsoft, they have a nice solution to this problem. They're setting up a new data center in Germany. It's Azure. It's got all the features, all that. But they don't own it. I believe it's Deutsche Telekom who owns the data center. So that means that even though it's Azure, Microsoft doesn't have access to your data. And so it's not following American law anymore. It's following European law. And this is a topic for discussion, but I believe European privacy laws are better, at least easier to have control of, than American laws. We could discuss this for days, but let's just say that for now: European law, it's easier to follow. And it's better. You're laughing. It's a big topic, so it's way outside the scope for today. So, the cloud versus running locally: you're almost guaranteed a safer environment. You're almost guaranteed that the organization running the data center in the cloud has much more resources than your local company. And they probably spend much more resources on securing both the data center, the network, the servers, everything, than you can in your local data center. You get more security features, faster updates. And really, anything you can do locally, you can probably do in the cloud. So it's the same, but more. Also, you get better physical protection. Like my company, we have two data centers. They're physically separated. We've got several networks connecting to them. So it's not like it's going to stop. But should some disaster happen in my home town, then we have a problem. In the cloud, the data centers are widely separated. So you can set up, for instance, services to be replicated between Western and Northern Europe. So even if something bad should happen in Ireland, it's still up and running. So the physical security is better. Also, the protection of the data centers themselves, that's insane. I recommend you look it up. Crazy guys. But still, there are reasons not to use the cloud. If you, for instance, have existing infrastructure, moving to the cloud can be expensive. There are hybrid solutions. I'll get back to them. But you also have the problem that once you move to the cloud, going back is not easy. Azure isn't compatible with Amazon in any way other than the most superficial. They offer virtual machines, they offer storage, they offer databases. They're not the same. So it's not easy to move between the environments. So once you've chosen a provider, you're stuck. And this is a real concern. People are afraid of lock-in. And also, the cloud is new. It's a new model. It's a new way of hosting your applications. And as I said, you need to trust your provider. And I see it as a natural next step.
We started typically with a local data center where your local people were running your servers. Then you found that it's terribly expensive, so you started hiring people to run it instead, paying for what they do. So then you didn't control the people anymore. Then you probably asked, why should I be running these servers at all? So you moved your servers to a hosting provider. Then you don't control the environment. You have to trust your provider. But you still probably know who they are. You still know where the data center is. And the cloud is the next step after that. You still have a provider that you must trust. But now you don't know where it is. You don't know where the servers are or anything. So what does the law say? This is probably only interesting for the Norwegians here, because I didn't have time to look up the laws of all the countries in the world. But Norwegian law is quite strict, so I believe as long as the stuff is legal here, it's probably legal in your country as well. And I'm not a lawyer, I'm a programmer, so I'm no expert. But I've been reading and checking things out. And you can't store classified military information in the cloud or outside Norway. But apart from that, the law really doesn't stop you. There are some old laws that say, for instance, that an archive can't be moved out of Norway. But as long as you keep a backup in Norway, you're probably good. And you can ask for dispensation from that law. The government, they're actively working on modernizing the law, because they want us to use this. And for those of you who are interested, I've got some resources on it, so I can help you look it up. But we won't dig into it now. So the short summary here is that the law doesn't stop you. You can use the cloud for sensitive data. Really? Yeah, it's true. You can do it. But the same requirements apply as if you were hosting it in Norway, locally. Of course, in your country, there may be restrictions, there may be other laws. So make sure to look this up, because if you don't check it out, then there's no guarantee. Talk to a lawyer before you move your data into the cloud. So if you're still uncertain, you can use hybrid solutions. And this is quite nice, actually. You don't have to move everything into the cloud at once. There are several ways to take one step at a time, to move gradually. For instance, you can store your data in your local data center while you're running your servers in the cloud. You can set up a VPN between them. And there are other technologies. So you can run your servers in the cloud and keep your sensitive data at home. Then you've got partially a cloud solution, partially the old way. And it can be a nice way to start your transition. Also, you have hybrid cloud options. Microsoft offers the Azure Pack, so you can actually set up your own cloud, your own Azure, in your data center. And you can combine this with the public cloud, the real Azure. So you can set up some services locally, some in the cloud, depending on what you're comfortable with. And you can move them over time. And you can, of course, run your own private cloud if you want the cloud features but you're not comfortable with the public cloud. I don't see that as a really nice solution, though. But the hybrid options, they open up many new doors, I think. So, anyone doing this now? We're working on an application. I won't get into the details. We're not done yet.
But we're going to handle sensitive medical information and do analysis and reporting on those data in the cloud. We're actually going to use Microsoft Power BI to do the analysis. It's going to be great. And I believe we're the first in Norway to do this. We're not done yet, but I have to tell you guys, because I think this is quite cool. We're probably going for both the hybrid and the non-hybrid solution, depending on the customer; we're supporting many customers. So we're probably going to recommend that they replicate data into the cloud, move a copy there. But we're also going to support connections to the local data center. And another example, this is from Microsoft: the Dartmouth-Hitchcock Medical Center in the US. They're gathering medical sensor data from patients. So this is real-time information. They're pushing it up into the cloud, into Azure, where they use machine learning to monitor the data. If there is a problem, they can actually notify the nurse, the doctor, and the patient can get help before they even notice that there is a problem. It's fantastic. And setting up a service like that in the cloud is doable. But setting it up in your local data center, that would be a challenge in so many ways. So the cloud is opening up many new possibilities here. How do you secure an application in Azure? I just want to go through a few details. I'll give you some references later. Encrypting your storage: SQL Server offers TDE, Transparent Data Encryption. It encrypts the entire database. It's just a click of a button and it works. So do that. If you use blob storage in Azure, there is a preview of encryption now as well, so your blobs are encrypted. They're going to roll it out this autumn sometime, I don't know exactly, but it's in preview now. And you also have client-side encryption if you need that. So you can encrypt your data even before it arrives at your service. Microsoft offers Key Vault as a nice way to handle all your secrets, your connection strings, your passwords, your encryption keys. Key Vault offers a nice API to access that data, only when you're allowed to. So it's quite easy to use. It's secure. And you can even use hardware encryption. So it's safe. And if someone somehow should be able to access your encryption keys, you'll know. Also, you have Azure Virtual Networks, if you need to connect to your on-premises services or you want to limit exposure of your cloud services to only your local data center. And Azure AD offers some really nice security features as well. I'll give you some links and I'll publish my slides later, so watch the NDC Twitter feed, because you need to read up on this if you want to do it. There are some really nice features in Azure. If you're not on Azure but Amazon or Google, it's the same, but different. You can do more or less exactly the same things. It's been a while since I used Amazon and Google, so I don't know so much about the details. But for instance, Amazon, they offer what is called Amazon Direct Connect, which allows you to set up a channel to your local data center, like a VPN. And they also offer essentially the same features for encryption of data as Azure does. And Google, they've got Google Cloud Interconnect. It's the same encrypted channel back to your home data center. And they also offer the same standard security features, encryption, all that. So I'm used to using Azure, and that's what I talk the most about. But Amazon and Google, they offer the same thing, more or less.
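As a small example of that Key Vault pattern, here's a sketch using the current Azure SDK for .NET, which is newer than the API that existed at the time of this talk; the vault URL and secret name are placeholders:

```csharp
using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

class Program
{
    static void Main()
    {
        // Authenticates via managed identity or the local environment;
        // no credentials are stored in the application itself.
        var client = new SecretClient(
            new Uri("https://my-vault.vault.azure.net/"),  // placeholder vault
            new DefaultAzureCredential());

        // The connection string lives in the vault, not in config files.
        KeyVaultSecret secret = client.GetSecret("SqlConnectionString");
        Console.WriteLine($"Fetched secret '{secret.Name}' from Key Vault.");
    }
}
```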
And I'm good on time as well. So let's summarize. If you know what you're doing, then I believe the cloud is quite safe for sensitive data. There are no technical reasons why you can't do it. You get the same security features. You get a safer data center. As long as you trust your cloud provider. And they're certified at all levels, so I believe you can trust them. But make sure: talk to them. And as long as you trust them, then there are no technical reasons why you can't use the cloud for sensitive data. According to Norwegian law, you can use the cloud. There are some old laws that can make things troublesome; you have to ask for dispensation, or cheat. But the government is working on those laws as well. It depends on what you're doing, but you want to keep your data in Europe. I believe that goes for all EU countries: you want to keep your data in Europe, because then you're under European law, and it's much easier. You want to secure services exactly the same way as you do when you're running in your local data center. But in the cloud, you can do more if you need to. And from what I know, the only exception here is classified military information. If you're working with that, then don't go for the cloud yet. Boring. So the challenge is the skeptics and convincing them. And I hope what I said today maybe can help. So let's end with some references. I will publish the slides. The first one is an analysis of the taxi data. It's a really interesting blog. It's a fantastic story. Also a story about the Danish railways. There are some Norwegian links here to what the government says, and the Norwegian data protection authority. And the cheat sheets from OWASP. They have a tremendous amount of resources covering all aspects of security. So visit their site and read. It's a brilliant source. Fantastic. Also on the technical side, there are some references to Microsoft. There are some long links, but I'm going to publish this, so don't worry about it. Microsoft, they have some sample apps up on GitHub. There are no applications covering end-to-end all the security features, so you have to mix and match. I've recommended that they build a proper example that covers everything, so I hope they do that soon. And there's also some documentation on Key Vault, client-side encryption and storage encryption. The storage encryption, as I said, is in preview, but it's going to be available quite soon. So I have time for a few questions if you want. If you want to kick me down from here for what I said, then you're free to do so. And I can't see a thing. The light is terribly bright. And if not, I'm here for the rest of the conference. So come on, let's have a talk if you want to. Yeah. Thank you. Thank you.
In my job I work with very sensitive medical information. The traditional view here is that this sensitivity excludes the cloud as a possible platform. I believe that this is not only wrong but also a very unfortunate limitation. In this talk I want to look at this situation, to discuss the background and then to look at what you can do to fix it. In my opinion there is nothing stopping you from handling sensitive data in the cloud, but you have to know what you are doing. The same requirements for data security apply to the cloud as to on-premises systems; the main difference is how we perceive the cloud. The focus of my talk will be on Microsoft Azure, but the principles are reusable across all platforms.
10.5446/51827 (DOI)
Okay, then folks, good morning. Thanks for coming. Hope you're enjoying the conference so far. My name is Matt, and today we're going to talk about .NET without Windows, which of course means it's a .NET Core talk. And we're going to have a look at what it means to build and run and deploy and debug a .NET Core application without having to use Windows, without using Visual Studio. And the first thing we need to do really is kind of ask ourselves why. Why are we interested in doing this? Why do we want to run .NET without Windows? And of course the answer is because all developers love MacBooks and we're all sick and tired of running Windows on a VM. Although of course if you want a slightly less flippant answer, there are probably a couple of things you could say here. Firstly, I think there's a decline of the monoculture, the idea that Microsoft has to be the entire stack top to bottom. We've had more of a rise of polyglot programming, the idea of using different tools, different languages, different operating systems: the right tool for the right job. We've also seen in recent times the rise of the DevOps culture, more specifically the idea that you can automate your infrastructure. It's got to be repeatable, it's immutable, and you can automate it. And Windows doesn't really fit into this world at the moment. They're working on that with Nano Server, and it's going to be much more scriptable and workable. But right now, the majority of the tooling around this is really focused on the Linux space. And the third reason is the cloud. Running your servers on somebody else's hardware is a great idea. It's nice being able to scale up and scale down as appropriate. But when you pay for this, and you pay by the size of the files on disk and the size of the memory footprint that you've got, then Windows can be a bit heavy for this. Windows Server is several gigs whereas Linux is several hundred megabytes. When you install Windows Server, you've got more files on disk than you're actually going to need. You're not going to use COM, you're not going to use WPF, you're not going to use Notepad or Explorer. And this just kind of bloats what you've got on your disk, increases the surface area you've got for security attacks, and means you've got more reboots and so on. And this is definitely the use case for Nano Server in the future, a much more stripped down version of Windows which is, again, several hundred megs, similar to Linux. But Linux is there now. So we want to have something within that space where we can reuse our skill sets as .NET developers to implement applications. But it's not just .NET Core. We shouldn't just totally focus on .NET Core. We've been able to do cross-platform for a good while now. Mono has been around since 2001, since pretty much the start of the .NET framework itself. It's cross-platform. It runs on Linux, on Mac, on Windows, even on your PlayStation, which is pretty cool. The other interesting thing is that it's not just for server applications. You can do GUI applications with it. GTK# is a cross-platform GUI toolkit, so you could do desktop applications. The other thing to say about Mono is that it's open source, which means that you can contribute to it. But also, anecdotally really, it means that there isn't a big corporation behind it like Microsoft to put in the resources that you'd require for such a big project.
So anecdotally, you are more likely to encounter incompatibilities with the .NET framework and other issues which can cause you problems. For example, the current conversation around .NET Core is that it is faster than Mono. So this is a good thing here. It also works both ways. Because Mono is open source, it can take code in, and because .NET and .NET Core are open source, Mono can pull that code in. So Mono has seen a number of things brought in from the .NET framework. They've replaced the ThreadPool implementation with the one from the .NET framework. Just the other week on the On .NET show, Miguel was talking about the number of class libraries that have changed, and depending on how you count it, it's between 40 and 60 percent of the base class libraries that have been updated with code from Microsoft. They're also looking at bringing the garbage collector in as well. And then we've got Xamarin, which is perhaps Mono's biggest sponsor. They provide cross-platform support for the Mac, for iOS, for Android, and the thing they add on top of Mono really is platform API bindings so you can do native applications. Everything is statically compiled ahead of time, statically linked, so you don't deploy the runtime. You've still got a garbage collector, but you don't necessarily have things like the JIT compilation. This has also recently been open sourced with the acquisition by Microsoft. But again, the interesting thing I want to point out here is Xamarin.Forms. It's again about cross-platform desktop applications and GUI apps having a native look and feel by abstracting the native widgets: building a XAML framework, not a toolkit, that works with the native widgets and gives you the native look and feel. That gives you kind of a common subset of functionality, but it gives you an escape hatch if you want to get into richer, more native functionality. And this is therefore iOS, Android, Windows Phone, and the interesting thing is there's a WPF backend which is nearly finished. So this could be an interesting thing to watch for doing cross-platform desktop applications. And then Unity. Unity is a huge thing as well. The website says they have four and a half million registered developers. If you haven't used it: if you've downloaded a game on your mobile phone, chances are it's powered by Unity. It's a native game engine, but the logic behind the games can be powered by .NET, by Mono. And this is very much cross-platform as well. iOS, Android, Windows Phone, Windows, Mac, PlayStation, Xbox, virtual reality things and even Tizen. Anybody know what Tizen is? More than me. Well done. I've got no idea. So that brings us to .NET Core. And I kind of assume all of you have heard of .NET Core, but just to quickly recap, it's a new .NET stack from Microsoft. That means it's the full stack, top to bottom: the CLR itself, the runtime, the JIT compiler, the garbage collector, and the base class libraries as well. System.Threading, Collections, Collections.Immutable, XML, HTTP and so on. It's not brand new. It was initially forked from the .NET framework, so it is based on tried and tested code. And this is now cross-platform. This is a big thing here. It works on Windows. It's also going to work on Nano Server, which is a good thing. And it also runs on OS X, various flavors of Linux, and also FreeBSD. FreeBSD is in brackets here because it's not listed as one of their officially supported platforms, but it does run on there.
It's open source as well. It's hosted on GitHub. You can get the source, build it, contribute. You can interact with the developers. You can raise issues and so on. It's all good. And the other big thing about it really is that the base class library has been very much refactored. All of the assemblies have been split down into a more fine-grained set. So the collections API, for example, is a good one. You'd have System.Collections and System.Collections.Immutable as separate assemblies. You've got threading as one assembly, threading tasks as another. And the same with things like IO: zip files will be different to drive info, which is different to file info, all separate assemblies. And you get NuGet everywhere. Every single assembly is packaged as a NuGet package. The runtime is also a NuGet package. This is a big deal with .NET Core. I'm not going to spend too much time on this, but this is kind of the way that compatibility is going to work with .NET Core, and it's worth looking at in more detail in the docs on GitHub: the idea of the .NET Standard platform. This is a virtual platform which other platforms implement. So it's kind of like a specification really. It's a common subset of APIs that are available on a particular physical platform. And it's versioned so that each version is sort of backwards compatible. The APIs in that subset are additive. So for example, you've got .NET platform standard 1.6, and the platforms that currently work with that are .NET Core 1.0 and the .NET framework 4.6.3. The idea of this is that it kind of replaces the portable class library idea. Portable class libraries are not extensible. If you have a library that targets a portable class library, that specifies the list of all the platforms that it actually supports. And if you add a new platform, it's not supported. So this idea of having a kind of virtual API abstraction, a specification, gets around that. As long as the platform conforms to one of these standards, then it can consume stuff. The idea is that as a package author, you target the lowest .NET Standard possible, and in that way you get the widest reach of actual platforms. So for example, if you targeted 1.6, your library would only work on .NET Core 1.0 and .NET framework 4.6.3. But if you targeted 1.3, for example, you can run on .NET framework 4.6 and above. You can also run in Windows 10 applications and also, of course, Mono. And likewise, if you target 1.1, then you can run on all the platforms that 1.1 supports. Application authors will then target the actual platform. So you'd still target .NET 4.5.1 if you run on the .NET framework. Or you could use another new one here, which is netcoreapp, which is an application for .NET Core. Now I don't think we can talk about .NET Core really without talking about some of the recent angst in the community and the early adopters, and a bit of its image problem. I think this image here from one of the MSDN blogs is a good sort of metaphor for this. At the bottom layer here, we've got .NET Core. That's the runtime, the base class library. It's all nice and neat and lovely and ready and everything. The next layer up is ASP.NET Core, which relies on .NET Core below. And again, that's nice and neat and ready and everything. And then at the top, in this little kind of thing not quite fitting in and sort of sticking out, we've got Visual Studio and the tooling and everything. And this is kind of the situation we're in.
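Before the recap, to make the .NET Standard targeting concrete: at the time, a library's project.json declared the standard it targets rather than a concrete platform, roughly like this (a sketch; the version numbers are illustrative):

```json
{
  "frameworks": {
    "netstandard1.3": {
      "dependencies": {
        "NETStandard.Library": "1.6.0"
      }
    }
  }
}
```

Any platform that implements .NET Standard 1.3 or higher can then consume that library.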
So to briefly recap some of this: up until RC1, during development, everything was moving along nicely. We've got the cross-platform ports. We've got the refactored base class library. We hit RC1. We've got a go-live license. Everything's good. And then after RC1, it's announced that they're going to replatform. That means that the tooling is going to get a complete rewrite. This is a breaking change on top of RC1. It makes sense. Having had a look at the platform, you can see that it's actually really quite focused on ASP.NET development and doesn't necessarily address some of the other scenarios that .NET Core would like to address. And so there's been a rewrite of all the tooling there. But we're then left in a situation where we've had a go-live license for RC1, and we've had lots of breaking changes for RC2 in the tooling. And right now, with the release, we've got .NET Core, the bottom layer, and the middle layer of ASP.NET Core; they are at release candidate 2 status. They're going to RTM at the end of June. But our tooling is at Preview 1. And that's just a bit kind of weird. And then on top of that, we've got changes to the project format. So the project format file right now is project.json. This is going away, which makes giving a talk about it fun. But the idea again, having a look at what it does, is that it's very much focused on the ASP.NET side of things and the sort of Node.js way of working. And it's great for simple applications, simple libraries. But if you want to do something a bit more complicated, then there's kind of not enough scope to work with that and get it done. And so Microsoft could invest time and effort to increase what project.json does, make it more flexible, more powerful. Or they could turn to their existing solution to this problem, which is MSBuild. And that's what they're doing. So they're rolling back the project.json changes and they're using MSBuild and csproj files. That then gives us a little bit of a dilemma. Do we port now and migrate everything to project.json, just for it to return back to csproj files, or do we hang on and wait? The tooling is going to help with this. The tooling is going to make this somewhat automatic. But then that's due with Visual Studio Next, and we don't really know what the RTM date for that is; sometime around November is kind of the best we know. So there's a couple of frustrations about this, and then there's another thing. After the RTM, there are plans to greatly expand the API surface. This was already part of the plans. It was already going to happen. The API surface in the .NET Standard was going to increase and improve. But it looks like some of the APIs that were removed from .NET Core and the .NET Standard, cut to sort of improve the developer experience, are now going to get put back in again. And this then gives us a bit of an interesting dilemma. If we now have a .NET Standard 2.0 which implements a whole bunch of APIs that were previously cut, then .NET Framework 4.5, for example, could implement all of those APIs. But how is it now going to support .NET Standard 2.0? You're going to have code in .NET Standard 2.0 which can't work on 4.5. So this all looks very much like it's associated with the Xamarin acquisition.
The target APIs that are going to be implemented are pretty much the same as Mono's mobile profile, which is the subset of .NET Framework APIs that Mono implements. And really, reading between the lines there, does that mean that .NET Core is now starting to target Xamarin and Unity? There are questions really: does this harm adoption? Should I bother porting right now if it's going to be easier later? Because if I'm going to go back to csproj files, I don't have to convert my file format, I don't have to work around any API changes there. So from our point of view at JetBrains, we've got Project Rider, which is going to be our .NET IDE. We run ReSharper at the moment on Mono when we're running cross-platform. We want to put that on .NET Core, because we've been told .NET Core is going to be faster than Mono. But we've got about 300 projects to convert. So right now we'd have to convert 300 projects to project.json and fix up any API changes, or we could just wait. And waiting looks a whole lot easier. The other thing is, does this change the vision of .NET Core? If we had something which was kind of small, focused on nice layering, on the nice refactoring of the base class libraries, does this kind of change the vision by greatly expanding the APIs like this? And right now, these are kind of early days really. We don't know enough detail to know how this is going to affect things. But don't panic. This is all okay. This isn't too bad. This is the very top layer that we've got going on here. The bottom layer is still good. The .NET Core and the ASP.NET Core layers are good. And most of this really is a timing and communications issue. If this had happened during beta, this would be fine. We wouldn't be bothered about this. But it's happened during the release candidates, and that kind of makes things a bit tricky. It's not the best timing, but we can work with it. There's a great quote by Nick Craver about the project: it's not developed in the open, it's coded in the open. And this is the thing: the decisions are made behind closed doors. And that's not necessarily a bad thing. For example, the Roslyn project, all of its language design meetings happen behind closed doors, but they're minuted and put on GitHub and discussed. So it's all fine. But we all need to kind of adjust to this and get used to this idea that it's still Microsoft's project. These changes can happen and it's okay. But we just need to make sure that the communications around this happen well as well. It's all recent news. The dust will settle. It'll move on and it'll be okay. The changes all make sense as well. It's actually going to make things better. The new command line tooling is better. It's more extensible. It's better focused on tasks that we want to be able to do. The project.json changes are going to be good, because we're going to have projects that do need the power of MSBuild and do need to do extra interesting things like that. And also, the idea of the extended API set, it is going to make porting code easier. And if we can get more code onto the new platform, it's going to be better. So the changes do all make sense. We also have very smart men and women working on this. So it's a good project. It's going to come together and it's going to be okay. And the other thing to point out, to reiterate, is that despite any kind of controversy at the top, the .NET Core and the ASP.NET Core layers, they are definitely solid.
They are good quality, they're stable, and everything is genuinely ready for RTM. The other thing to point out, and I know it's not terribly easy to read up there, is that Microsoft are very much aware that csproj files are horrible and project.json files are much nicer, simpler, and easier to use. And they're going to be aiming to work towards this. Most of the stuff in the csproj file on the left there is there for Visual Studio. And they also control Visual Studio, and they're rewriting the project system for Visual Studio Next for C#. And they have already got an issue which is tracking this and trying to reduce the amount of cruft that is in the file. I don't think it'll ever get to the simplicity of project.json, but it can certainly get close. Right. That's enough about that. Let's actually start talking about .NET Core outside of Windows then. Okay. So the first thing you need to do is get .NET Core actually installed. You can get it from this URL, which is clearly a URL designed to be written down and not said out loud. Really, "www dot dot dot net" is too much. Although to be fair, it is just shortened to dot.net, which is clearly better. And once you've got that, then it's time to create a project. So what we're going to do now is switch to the command line, and obviously everything is very much command line driven. You're all familiar with that if you're not working on Windows. And the first thing we need to do is run the .NET command line tools. And we can do dotnet new. And we've now got a simple sample application here. Now, the .NET tooling works kind of like Git, where you've got a driver application, dotnet, and then you've got a bunch of subcommands. And those subcommands will do interesting stuff for you. You've got new, you've got restore, which does something like a NuGet restore, build, publish, run, test and so on. We'll have a look at some of these as we go through. Okay, so we've got various files. We'll have a quick look at those. Program.cs is a simple hello world application. The interesting thing to point out is that we've got a static void Main here. So this is going to be a console application. The next thing to look at is project.json. This is our file format. And this is fairly simple. There's not much going on here. We've got a version number for the thing we're going to build. And then the other two elements we've got are frameworks and dependencies. The frameworks element tells us what framework we are targeting. And here we're saying I'm targeting netcoreapp1.0. This line here, the imports, is a bit of a backwards compatibility thing. It's saying that I'm going to build a netcoreapp1.0 application, but I can also consume dnxcore50 stuff. This is kind of one of the names that got used in NuGet packages as .NET Core was evolving. And we're saying that until everything has been updated and moved on, I can still consume dnxcore50 libraries. The other dependency we've got is on a package called Microsoft.NETCore.App. So all the items in the dependencies list here are packages. They're NuGet packages and we can just consume those. This one is the standard package for .NET Core applications. And we get to specify the version, the RC2 version. And we've also got a type of platform; I'll come back to that later. So the next thing we need to do is dotnet restore. This is a NuGet restore, and it will go off and it will download absolutely everything.
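For reference, the files that dotnet new generated at RC2 looked roughly like this. First Program.cs:

```csharp
using System;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}
```

And project.json (the exact Microsoft.NETCore.App version string varied per release, so treat it as a placeholder):

```json
{
  "version": "1.0.0-*",
  "buildOptions": {
    "emitEntryPoint": true
  },
  "dependencies": {},
  "frameworks": {
    "netcoreapp1.0": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.0.0-rc2-*"
        }
      },
      "imports": "dnxcore50"
    }
  }
}
```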
It will download the Microsoft.NETCore.App package and all of its dependencies. And fortunately, I've done this earlier and it's all locally cached, because there are a lot of them. All the packages are cached locally, globally per user. And so you've got a nice easy way of pulling and restoring without having all the pain of downloading everything all the time. And there's a whole bunch of different packages going on in there. We can see there are some Microsoft ones. We've also got the System ones there; these are the nicely factored base class library. And we've also got the runtimes as well. So the actual CLR itself is pulled down as NuGet packages. And once we've restored and we've got our NuGet packages all downloaded, we end up with a project.lock.json file, which is a massive JSON file whose contents we don't need to worry about. But basically, it lists all of our transitive dependencies from our packages. So rather than having to walk all the packages again every time it needs that information, the tooling can just pull it straight from the lock file. Okay, next we can do dotnet build. This will compile the app for us. And once we're done, we've got a bin and an obj folder. We've got a Debug folder underneath bin, and we've also got netcoreapp1.0 underneath that. We can target multiple frameworks, so everything is put under a directory per framework. And in that folder, we've got several files. We've got a couple of JSON files which are all about the runtime information. But the important one there is simple.dll, plus a pdb file for debugging as well. Now interestingly, we have a dll file even though it's an executable. This is because even if it was a .exe, we wouldn't be able to run it, because I'm on a Mac. You can't run exes on that. So to run it, we've got two options really. We can do dotnet run, and that will run the project; it'll make sure everything is compiled and up to date, and it'll run it. Or we can get to the dll itself and do dotnet simple.dll, and it'll just load that dll and execute it. Okay, so that's running a very simple project. Now that's good, but it's not that interesting. So we could create something a bit more interesting. So how do we create an ASP.NET Core application? We don't have Visual Studio, so we can't use Visual Studio templates. dotnet new has a type parameter, so it looks like you could perhaps put in something interesting here and generate something with a bit more fun going on. But it's only implemented for console. So you can do dotnet new --type console; that's all you've got. There are a couple of NuGet packages which can extend the .NET command line tooling. There is one for web code generation, but it doesn't seem to do anything yet. It's not documented yet. So maybe this is how it'll work in the future, I don't know. There are existing packages which extend this for Entity Framework and publishing to IIS, so this could clearly be the way to do it. So what we're doing instead is using Yeoman. Yeoman is a Node-based scaffolding app which is extensible, and it's for creating web applications, to start with really. The OmniSharp team have created a generator for ASP.NET Core, and we can use this generator to get ourselves started. We have the steps here to actually get it started.
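As commands, the setup sequence looks roughly like this (a sketch, assuming Node.js and npm are already installed; package names as documented by the OmniSharp generator project):

```sh
npm install -g yo bower generator-aspnet   # Yeoman, Bower and the ASP.NET generator
yo aspnet                                  # scaffold a new project from the templates
```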
You need Yeoman installed. You need Bower installed; Bower is a front-end package manager for CSS and JavaScript files. And then you need the generator installed. And then you need to run Yeoman with yo aspnet, which I'm clearly too old to say and get away with sounding any kind of cool. But it's easy to run: we will do yo aspnet. And now we get a choice of options of things we can build here. We can build an empty web application. This is probably the one you'd pick to get started, because it will just give you the framework to get going. You can have a console app. There are a couple of demo web applications which are going to be useful. You can do a Web API application, Nancy, a class library and a unit test application as well. So we'll just create a simple web application and use the default values there, just to get us up and going. It creates a bunch of files for us, then it runs npm install and bower install to make sure we've got all of our client-side JavaScript-type files, and we're done. And if we go into WebApplicationBasic, we've got a lot more files here. So this is a much more interesting project to get started with. And if we have a look at the project.json, let's just get to the top of the file, you can see that there's a bit more going on here. The dependencies list is a bit bigger. We've got Microsoft.NETCore.App here again, so we're still a .NET Core application, but we've got extra things on top of that: ASP.NET Core packages and various extensions as well. We've also got a new element here for tools, which means that we can extend the dotnet tooling with various packages as well. So now we can do dotnet razor-tooling, and that gives us command line tooling to work with Razor files. Other interesting things we've got in here: we can specify runtime options to use the server version of the garbage collector rather than the desktop version, and we can run some scripts when we're publishing as well. Okay, so we need to do dotnet restore again. And again, this will trundle through; I've got everything cached locally, so it's all nice and quick. It's created our lock file so that we've got something to build against. And we can do dotnet build and it'll go off and compile it. And then we can do dotnet run again. And now we see that we're running on localhost:5000. We'll clear the screen right there and just open the web browser. And as that loads up, we're hitting the view, we're compiling the views, and in fact, if we go back to that, we can see that there's logging going on on the screen as we click around and move about. So we can run an ASP.NET Core application on the Mac. So let's have a look at the code, and that's probably a good time to switch over and have a look at editors. What tools can we use when we're not on Windows to edit and work with our projects and applications? So I'd like to introduce you to Vim. This handy chart here... I'm just kidding. We've got several different options. Vim is clearly a very capable editor and we won't go into that. We've got a couple of IDEs that we can use, and a couple of editors that we can use as well. There's JetBrains Rider, a .NET IDE which is using ReSharper in the back end; that can run cross-platform and it's good.
Xamarin Studio has recently got a plug-in which allows it to open RC2 applications as well, so you can use that as an IDE to work with your code. And then there's Visual Studio Code and Atom as rich text editors, which with the OmniSharp plug-in give you editing and code completion for C# and various projects. Okay, so let's have a look at that. We'll start with looking at things in Rider. Rider gives you the sort of IDE experience that we've got here. If you want to do templates, you can have templates here to create new projects and everything. We can have something which creates you a .NET Core class library or a desktop-style class library and so on. So we can get the templates that we're familiar with from Visual Studio, or we can just open an existing application. So we'll open our WebApplicationBasic here. I cheated a little bit there; I've run a little script. The IDEs, both Rider and Xamarin Studio, don't really work with project.json as a file format. They still require the same thing that Visual Studio requires, which is a solution file and an .xproj file, so I've just cheated and created the .xproj file there. Okay, let's just close those windows. So we've got an IDE, with all the kinds of things we'd expect from an IDE. We can see our references. We can see our packages, and we can drill down into them and see what the dependencies are and what they resolve to, with various assemblies and so on. We can navigate to the program, and now, actually looking at the code, we can see that this is all a bit different to what we'd expect from a web application. The big difference here really is that we're a web application but we have a public static void Main. So we are still a console application even though we're a web app. What we do is we create a WebHostBuilder, and basically we set up a bunch of configuration for hosting a web app, and then we run it. There are a couple of interesting things: we do use Kestrel, and by default we've also got UseIISIntegration even though we're not on Windows, and it variously starts everything up. Okay, so while we're in the editor here, we've got all the usual kinds of things we can do. We can tidy things up, navigate around. Let's go to a HTML file. And we've got all the usual editor features, things like code completion. We can do code completion with the tag helpers. ASP.NET Core includes the idea of a tag helper, which is a HTML tag that runs server side, as part of the CSHTML processing. And while it can look like a real tag, it's got additional processing on top. So for example, we can have a link to an ASP action here, and we get those as code completion, and we can navigate around and move around all those kinds of things. And we can do a similar sort of thing in Code. So I've got Visual Studio Code here. This works in a different way: it doesn't work with project files and solution files, it works with folders. And so what we can do here is open a folder. And there's my WebApplicationBasic. If I open this up, Code sort of restarts.
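While Code restarts, here is roughly the Program.Main hosting pattern just seen in Rider. This is a sketch based on the RC2-era templates; Startup is the template's configuration class, and the exact chain of calls can vary slightly by template version:

```csharp
using System.IO;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    // Even a web app is a console app: Main composes and runs the web host.
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()                                    // in-process web server
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseIISIntegration()                             // harmless off Windows
            .UseStartup<Startup>()                           // the app's configuration class
            .Build();

        host.Run();                                          // blocks until shutdown
    }
}
```

The design point is that hosting is explicit: there's no IIS magic, you compose the host yourself. Back in Visual Studio Code: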
And it takes a moment, and then it notices that it can actually work with this folder as a project, and asks if you want to create some files, and it creates this .vscode folder with some config in there which helps. And again, we get similar functionality to what we're used to with an IDE. We've got tool tips and auto-completion. Let's just put this in, because I'll need this later. And if we hover over everything, we're good there. We've got navigation. We've got squigglies, and we can fix those. We can have our remove unnecessary usings and so on. So we've got ways of doing rich editing and IDE-type functionality on different platforms. Let's just go here. We can also set breakpoints; we can do debugging as well. So we can switch to the debug window, put a breakpoint in our code, and just press go. It'll make sure everything's up to date, and it won't be, because I've just changed things; it'll rebuild it, and then it'll run and start debugging. Okay. So let's just start the browser again and it'll hit the first view. It'll compile everything, and we put the breakpoint in About. So if we click About now: bang, we've hit our breakpoint. And we've got variables on the side here. We can drill into those and see the values that are set. We can highlight and right-click and evaluate, and it's down in the debug console here. We can also set data. And then if we run and carry on, it comes through and everything. So we can do debugging as well. We've got editing, we've got projects, we've got debugging, we've got refactoring. We can do all these kinds of things without having to use Visual Studio. Okay. So what's next? Testing. I'll switch straight back to the console. We can use yo aspnet again for this one, and we can create a unit test project. Again, I'll just take the defaults and we'll do a quick restore. So, xUnit is kind of the de facto standard for running tests in .NET Core. It's basically the first framework that got ported over. There are also frameworks that support MSTest, and NUnit, I believe, has one; I think it's NUnit Lite that they use. And you can use whichever one you want there. And if we have a quick look at the project.json here, we'll see that this is handled nice and easily by taking a dependency on dotnet-test-xunit, which adds the test runner into the tooling, and we've got xunit, which is our dependency for actually using the xUnit API. We're also saying that the testRunner is xunit, and that enables it within the tooling. So we can do dotnet run, and we can see that it will immediately fall over, because we're not a console application. If we have a look at the code, we can see this is just a class. So we're just building a class library here. We're not building a console app like we did with our simple demo or the web application. So we've got a simple test here. We've got two tests: one passes, one fails. And to run those, we just use dotnet test. That will make sure everything is compiled and up to date, and it will run it. We can see that xUnit does discovery and actually runs the tests. It prints out anything that fails, and then we get a nice little summary at the end. And so we can do testing as well. So we can create a project, we can edit it, we can do refactoring and building and debugging, we can test it. The next thing we want to do is hosting.
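Before moving on, a sketch of the shape of test class the template gives you; the class name and assertions here are illustrative:

```csharp
using Xunit;

public class SampleTests
{
    [Fact]
    public void This_one_passes()
    {
        Assert.Equal(4, 2 + 2);
    }

    [Fact]
    public void This_one_fails()
    {
        // Deliberately wrong, so 'dotnet test' has a failure to report.
        Assert.Equal(5, 2 + 2);
    }
}
```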
So how does this work in a world where we don't have IIS? How do we do hosting? And everything is a console app as well, so if everything is a console app, how does this all work? Well, we've seen it with WebApplicationBasic there. It's a console app: we configure everything up and then we run it. And this basically means that we are self-hosting. We have an in-process web server, and this is Kestrel. There was a great talk yesterday, Damian Edwards and David Fowler talking about the internals of Kestrel. They've done crazy things to optimize it. It is very, very fast. It's all based on libuv, like Node is. It's all very much async/await and efficient, they've removed every allocation they possibly can, and it is just fast. Here's a great stat: 2,300% more requests than an equivalent benchmark on ASP.NET 4.6. That's pretty fast. So that's all good. But don't expose it to the internet. The recommendation here is that you do not have this exposed to the internet. While they have made it as secure as they can, it's not battle tested, it's not battle hardened or anything; it's not designed to be exposed to the internet. Also, you're self-hosting, so it's a different thing. So what do we do now? We want to deploy it. We want to get it up and running to be able to host it properly. We need to do packaging, and this is dotnet publish. So we're back to the command line again. Let's go back to our simple hello world application, and we'll package this up ready to go. It's all nice and easy: we just do dotnet publish. That'll compile it and make sure everything's good. And if we go into the bin/Debug/netcoreapp1.0/publish folder, we can see these are all the files that we need to deploy. And it looks just like what we had before in the Debug folder: there's our simple.dll, the .pdb file and a couple of runtime JSON files. In fact, let's just go into that folder. We can just do dotnet simple.dll and run it, and it's fine. Now, one of the premises of .NET Core was that you could do application-local installs of the .NET framework. And clearly here there are no dependencies going on; the .NET framework isn't being packaged up as part of all of this. So what we've created here is what Microsoft are calling a portable application. This means it is portable to anywhere that already has .NET Core installed, and so it can be used anywhere that .NET Core can run it and resolve everything and work with it. And the thing which drives this is the type equals platform property of the dependency here. What we're saying is that the Microsoft.NETCore.App dependency is a platform dependency: it's already there, installed and handled by the platform. And when dotnet publish is packaging everything up, it sees this and it knows not to pull all of that in. We're telling it that it's going to be on the target platform when we get there. We can also build a standalone application. The standalone application will have all of its dependencies in it, and we do that by getting rid of the type equals platform. Now, I'm going to cheat again and copy this in, because there's an extra bit you need to add which I will fat-finger if I do it live. So we've got rid of the type equals platform from Microsoft.NETCore.App.
And then down at the bottom, we have to add in a new element, the runtimes element. Because we're saying we're going to run on a platform which doesn't have .NET installed, there are no smarts to figure out what runtime we actually need, so we have to tell the packaging, dotnet publish, what runtime we're targeting. And I'm going to run on my Mac here, so I want to target the OS X runtime. And I've changed the project.json, so I have to do another dotnet restore. And if I do dotnet publish now, it should build us a standalone application. I'm also going to add the -c flag with Release, which means I'm actually going to build in release mode now. And so we've got a release mode application. And if we go into bin, we've now got Debug and Release. And I can go into netcoreapp1.0. I don't have the publish folder directly now; I've got an extra folder, which is my OS X folder. So it's saying that this is now not only framework specific but runtime specific. And I've got a publish folder under that. And if I list that, I've got a ton of files. This is my full dependency set now, a full standalone application. And to run this, I can't use dotnet and the DLL name. But fortunately, dotnet publish has given us an executable here, so I can just run simple from the current directory and it'll do my hello world. And so we can package everything up, either as a portable application or as a standalone application. Right. So let's have a look at how that works with a web application and how that fits together. We can just do the same thing: dotnet publish -c release. That'll go off and build the application in release mode. It'll run a few more steps, because the project.json had some pre-publish scripts which run some Grunt and Gulp tasks for our CSS and JavaScript. And then we're done. So if we go to bin, we've got a Debug and a Release again. We go into Release, and we can go to publish. And if we list that, we've got a ton of files. But this project was set up to be a portable app, so it wasn't including the netcoreapp stuff; that is still set to be a platform dependency. But we've still got a lot of files here, because these are the dependencies on top of the platform. So if we just have a quick look at the project.json and look at my dependencies here: Microsoft.NETCore.App is still a platform dependency, but we've also got a whole bunch of other dependencies on ASP.NET Core and various extensions and so on. These all need to be distributed when we're packaging things up as well, and that's what these extra files are. And we've also got the main application; this is our program with the static Main. And we can run it by doing dotnet WebApplicationBasic.dll. No, because I forgot to stop the previous one and the port is in use. Okay. And we're listening on port 5000 again. Right. So that's all good for packaging. We can package our application up. We know now how to deploy it, because we just need to take all the files in that folder, put them on a server somewhere and run them. If we're doing a standalone app, we can just run the executable that dotnet publish creates. If we're running a portable application, we need .NET installed on there already and we can just do dotnet and the DLL name. But that's no fun, because we want to use containers; everybody wants to use containers.
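For reference before the containers bit, the standalone tweak just described looks roughly like this; the runtime identifier is the RC2-era one for OS X and is approximate:

```json
{
  "version": "1.0.0-*",
  "frameworks": {
    "netcoreapp1.0": { "imports": "dnxcore50" }
  },
  "dependencies": {
    "Microsoft.NETCore.App": {
      "version": "1.0.0-rc2-3002702"
    }
  },
  "runtimes": {
    "osx.10.11-x64": {}
  }
}
```

Note there is no "type": "platform" any more, so dotnet publish packages the framework itself.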
So if you've not used containers before, the idea is that it's a packaging format for wrapping up an application and all of its dependencies and running it in isolation. The key thing is that it's not a virtual machine. What it is is an isolated process group that runs on a shared Linux kernel. So the idea is you've got a Linux kernel at the bottom here, and then you can run a container which sits on top. And you can run multiple containers on the same Linux kernel and they are isolated from each other. So when you have one container running, it thinks it's the only thing running on the Linux kernel; it thinks it owns that kernel. And the important thing to point out is that right now, containers are all about Linux. It is a Linux technology. Microsoft have got Windows containers coming in the next version; in fact, I think they've got Hyper-V containers in the current preview builds of Windows 10. But right now, when we're talking about containers, we generally mean Linux containers. And this next bit, I wish they'd put on the website; it took me ages to figure it out. If you are trying to run Docker or any other kind of container on Windows or Mac, you need to have a virtual machine which runs this Linux kernel. It's all about Linux, is what we're doing here. And what you need to put things into a container is an image to run in it. Okay, and that's all nice and easy. What we want to do is have an image which basically contains all of these files. So we'll grab a Dockerfile and pull it in. And if we have a look at the Dockerfile, it's actually really quite straightforward; we've only got a few steps here. The first line is a FROM command, saying I want to build an image based on this existing image. And the image is the microsoft/dotnet image, which is packaged by Microsoft. It's a version of Debian, I think it is, which has .NET already installed on it for us. So we can build on top of this: we have .NET installed, and we can put our portable app on there and run it. The next line copies the current directory, which is our portable app files, into the /app folder. We change our working directory to /app. We use the EXPOSE command to tell Docker that we're going to be listening on port 5000. And then we run the application: we run dotnet and the DLL name itself. So it's all nice and easy; there are only a few steps and it's all straightforward. And now what we need to do is docker build, tagging it demo, in the current folder. That will build a new image based on the Microsoft image that we've got. So if we list our images now, we can see, let's just pull that out a bit, we have two images. I've got the microsoft/dotnet image, which I downloaded previously; if it wasn't there already, docker build would have downloaded it for me. And then we have my demo image, which we just created. And now I can run it. And very importantly, I need to tell Docker that I want to use port 5000. I'm going to run it interactively so I can do Ctrl-C and finish it. And there it is. It's running. Oh, yeah, I should point out that I have... don't start Windows 10, that would be bad... there we go. So I have my Docker instance running. I have my Linux virtual machine here.
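For reference, the Dockerfile just walked through is roughly this; the base image tag is approximate to the RC2 era and the DLL name is the demo's:

```dockerfile
# Base image with .NET Core preinstalled (Debian-based, published by Microsoft)
FROM microsoft/dotnet:1.0.0-rc2-core

# Copy the published portable app into /app and make it the working directory
COPY . /app
WORKDIR /app

# Document that the app listens on port 5000
EXPOSE 5000

# A portable app is launched via the shared runtime: dotnet <dll>
ENTRYPOINT ["dotnet", "WebApplicationBasic.dll"]
```

That Linux virtual machine hosting the shared kernel is where the demo picks up.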
That is my shared kernel, which all the containers are going to run in. So that's already running. Let's just open up the browser: localhost:5000, run it, and it fails. And this is again because we're running in a virtual machine. Our Linux kernel is running in a virtual machine which has its own IP address and is separate from localhost. So what I need to do is get the IP address of my Docker machine. Not sure you can see that. I can ask Docker what the IP address of the shared Linux kernel is, and it gives it to me here. And I can point the browser at that. Let's just do that again. And the browser will now talk to the shared Linux kernel on port 5000, which is then being forwarded to the container, and I can actually display that. So now we are hosting our application and running it in a Docker container. But we're doing that with Kestrel, our self-hosted web server, which we said don't expose to the internet: it hasn't been battle hardened. You want to put something in front of it. What we want to do now is put a reverse proxy in front of it. This is a bit of kit which will actually be the interface to the internet, and is something designed to do this job, rather than Kestrel, which is designed to do self-hosting and be as fast as possible. So, the idea of a reverse proxy: you're most likely already very familiar with a forward proxy if you've ever worked in a corporate environment and had to configure your browser to talk to a proxy to get your web pages; that's a forward proxy. Your browser knows about the proxy; it talks to the proxy and says go and get me google.com, go and get me another site. A reverse proxy works in the opposite direction. The client thinks it's talking to the end server, but the reverse proxy is then talking internally to another, actual server. So it allows internal servers to be reached by any client without the client ever talking directly to those servers. The easiest way to think of this is as a load balancer; a load balancer is a perfect example of a reverse proxy. So let's just step back out of this for a minute. How would this work in the IIS world, in the Windows world? Because we're all used to how ASP.NET apps are hosted in IIS, but for this idea where we've got executables and reverse proxies, how does this work in IIS? And the answer is to use the HTTP platform handler. This acts as a reverse proxy. It does process management, so it will start up the executable, the .NET DLL. And this is how it works for things like Ruby applications as well, where they run essentially as an out-of-process executable. The module is responsible for forwarding the request to these external child processes, and it's also responsible for telling the child process where to listen. When we're on Linux, there are several options here. You've got things like nginx, HAProxy, httpd, and this is where you want to speak to your certified DevOps engineer and not listen to me, and get the best solution for the job. These are reverse proxies and load balancers. HAProxy is a big one, used by GitHub, Stack Overflow, Twitter. nginx is also huge and used all over the world for this, and it also provides extra functionality: it's a web server, an HTTP cache and so on. And these are very easy to set up. So I've got, hang on, I can't see what I'm doing down there.
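For reference, the kind of HAProxy config about to be shown is roughly this; the backend name follows the demo's description and the IP address is a placeholder for the Docker VM's address:

```
# Minimal sketch: accept on local port 5000 and forward to the app in the Docker VM
listen webapp
    bind *:5000
    mode http
    server docker1 192.168.99.100:5000
```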
Right, so I've got a config file for HAProxy here, and there's a bunch of defaults at the top. The interesting thing is down here: we listen on port 5000 and then forward it on to a server which I've called docker1. That's my Docker VM's IP address and port 5000 there. So I can just run haproxy, which I've got installed on my Mac, and pass in the config file. If I swap back to the browser, go to localhost and refresh, I've now got my reverse proxy in place. So I'm talking to localhost, which is my reverse proxy, which then forwards the call into my Docker container. And if I had multiple instances there, I could use that for load balancing, round robin or whatever technique I want to distribute across them. Okay, so that's basically it. We can create an application, we can edit it, we can debug it, we can refactor it, we can use IDEs, we can use text editors, we can package it up, we can publish it, we can deploy it, we can put a reverse proxy in front of it, we can host it. So we can do everything that we need to do with an application outside of Windows and on Linux. So that just leaves me to say thanks, and I'm just glad I've managed to get to the end of a .NET Core talk without anything changing. So it's all good. We've got a few minutes if anyone has any questions, so please shout out; if not, come and grab me later, and thank you very much. Yes, a question: when it's running in Docker, can you debug? You can do debugging, but you have to do a couple of steps to actually get that working. I'm not going to be able to show you in the time we've got, but yes, you can do remote debugging as well.
The Microsoft stack has changed, and suddenly, it’s not just about Windows any more. Thanks to .NET Core, we can now host .NET applications on Windows, Linux and even the Mac! So how does it work? What does it look like? And why would I want to do it? Let’s take a .NET Core app through its lifecycle, and see how to create it, and what tools we need to test and debug. We’ll also see how we host a web app without IIS, and how to deploy in a world of Docker and containers.
10.5446/51815 (DOI)
Good morning. I'm so happy to see so many of you here this early in the morning. Thanks for joining me. So, as developers, we tend to go through different stages. My first stage was when I was about seven or eight years old. I got my first computer, and I remember I sat down and wrote: 10 PRINT "JIMMY", 20 GOTO 10. That was my code. I got my computer to do stuff for me. This was actually my first app, and if anyone wants the source code, I will be happy to share it. This is actually the moment when I realized that I wanted to be a developer. This is what I wanted to do with my life. The next stage was when I got a calculator and a mobile phone. There's a very special feeling when you're writing something on your computer and then you deploy it to something smaller that you can bring with you in your pocket. I could actually sit down and write code on my calculator during class. Mind-blowing for me. The third stage was when I was introduced to circuit boards: Raspberry Pi, Netduino, Arduino and stuff like that. I remember I went into the living room and yelled to my wife: Jessica, Jessica, you've got to come and see this. The LED is blinking. Look, the LED, it's blinking. On, off, on. See, it's blinking. And she just came into the room: yeah, what about it? You don't understand. The LED here on the circuit board is blinking. I made that happen. I think it was a bigger moment for me than for her, but I was so happy that it was blinking. Then the next stage was when I realized that I can control other devices; devices that I don't have any code control over. I can get measurements from sensors. I can turn lights on and off. And this is what we're going to talk about today: Bluetooth, and how we can control devices through Bluetooth. But why should we care about this? Well, first of all, it's really, really fun. And the second reason is this number: three billion Bluetooth devices were manufactured last year. Of course we want to be there as app developers and make apps for this customer base. So what we're going to talk about today is: how does Bluetooth Low Energy work? How can we figure out a Bluetooth protocol? And then we're going to go from Bluetooth Low Energy all the way to IoT. Let's see if that works. So my name is Jimmy Engström and I work as an ASP.NET developer. I spend my spare time doing Windows and some HoloLens. Together with my wife, I run a user group called Coding After Work and a podcast of the same name. I'm also a Windows Development MVP. But enough about me; let's dig deeper into the fun stuff. So Bluetooth Low Energy has many names. It's called Bluetooth Smart. It's called (see, there we go) Bluetooth LE, Bluetooth Low Energy, BLE. And sometimes, wrongly I should say, Bluetooth 4.0. It's part of the Bluetooth 4.0 standard, but it's not the whole thing. Bluetooth Low Energy was actually introduced back in 2006 by Nokia under the name Wibree, however you want to pronounce it. And then in 2010 it became part of the Bluetooth Core 4.0 standard. The nice thing about Bluetooth LE is that you can actually ask the device: what can you do? What services are you implementing? That's perfect for developers; it makes it easy to reverse engineer, to figure out the protocols. So a BLE device can tell you what services it has. A BLE device always has one service or many; it must have at least one. A service is identified by a GUID. And there's an organization called the Bluetooth SIG, the Special Interest Group, that has named a couple of these services.
They have documented them, and they have said that to be able to use a given GUID, the service needs to look a particular way. There are also, of course, services that are not defined by the Bluetooth SIG, and we will go into those later on. One service has one characteristic or more; it must have at least one. You can think of characteristics like methods or events, because every characteristic has a way to access it: you can read, you can write, you can indicate and you can notify. And the last two you can see as events; the device is going to send those values back to you. So this is the documentation from the Bluetooth SIG, the Special Interest Group, for the battery service. Their documentation says that to implement a battery service, you need to have a characteristic called battery level. It must have read access; it wouldn't be much use if you couldn't read the battery level. And then it may have notify. It doesn't have to, but it could. So let's look at this service in another way. We have the BLE device. It has a battery service. It has a battery level characteristic, and that must have read and may have notify. The really cool part of this is that you can actually build generic apps. You can say that my app wants to connect to any device that implements the battery service. So you don't have to write a specific app for a specific device. So now that we know how they work, how do we talk to the device? Well, the first step is to pair the device. You can do that from within Windows, and you can do it programmatically. They're actually working on, now I'm talking Windows 10 specifically, pairless Bluetooth as well, but they're not there yet. So you just search for Bluetooth settings, you click your device and you click pair. Sometimes you get this PIN window. Usually it's 0000 or 1234, something like that, or you can probably find it in the documentation for the device. Or you can do as I did: I just tried 0000, 0001, 0002 and so on until I got it. Luckily it was 0007, so I didn't have to do that for long. So if you want to do this programmatically, you set up a device watcher. You can request properties that you want access to. In my case, I want the device address and I want to know if it's connected. Then you create a watcher and you supply it with a filter, that's those three dots, and we'll look at the filter on the next slide. And you supply it with the requested properties: what properties do I want from it, and what kind of endpoint am I looking for? Then you just add the Added and Updated event handlers and start watching. And the filter looks something like this. You can find it on MSDN or somewhere like that, so you don't have to write it down or anything. Then we have the Added event, with its event handler. In my case, I'm checking the name. I don't have to check the name, since I'm checking for services. But in some cases they reuse the services, they reuse the GUID, so it can be a good thing to check the name if there's a particular set of devices you want to use. And then, if it is the right one, you just pair it. You can also use Pairing.Custom.PairAsync if you want to do some special pairing code. So, I found this scale at NetOnNet. It's an Anderson scale, but it's actually SenSun who makes them.
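As a rough C# sketch of that watcher-and-pair pattern, using the Windows.Devices.Enumeration API; the AQS filter string is the standard Bluetooth LE one documented on MSDN, and the name check is the demo's:

```csharp
using Windows.Devices.Enumeration;

// Properties we ask the enumeration to include for each device.
string[] requestedProperties =
{
    "System.Devices.Aep.DeviceAddress",
    "System.Devices.Aep.IsConnected"
};

// The AQS filter for Bluetooth LE association endpoints (the "three dots" on the slide).
string aqsFilter =
    "(System.Devices.Aep.ProtocolId:=\"{bb7bb05e-5972-42b5-94fc-76eaa7084d49}\")";

var watcher = DeviceInformation.CreateWatcher(
    aqsFilter, requestedProperties, DeviceInformationKind.AssociationEndpoint);

watcher.Added += async (w, deviceInfo) =>
{
    // Optionally check the name, since vendors sometimes reuse service GUIDs.
    if (deviceInfo.Name.Contains("SenSun") && deviceInfo.Pairing.CanPair)
    {
        await deviceInfo.Pairing.PairAsync();
    }
};
watcher.Updated += (w, update) => { /* wire this up so the watcher keeps running */ };
watcher.Start();
```

With pairing handled, on to the scale itself.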
So I've written a small application that lists all the services, all their characteristics, and all the ways they can be accessed, because the device knows these things and is happy to share them with you. I also created a collection, a database if you like, that converts these services to their real Bluetooth SIG names. So if you look at the SenSun scale, it contains Generic Access. This service contains things like the name and device address. Then we have Generic Attribute. This is the service that actually makes it possible to ask the device what it can do. Then we have Device Information. It contains the manufacturer, versions, the name of the device and stuff like that. Then we have another one, not defined by the Bluetooth SIG. Interesting. So I know that Generic Access, Generic Attribute and Device Information won't give me a weight value, because those are defined by the Bluetooth SIG and they don't have a get-weight. So the one named 0000FFB-something (I'm sorry, I'm not going to make you suffer through the whole GUID), the last one, is the one I want. Looking at our mystery service, it has two characteristics. They both have ways to access them: in this case, I can write to them, and I can get a notification from both of them. So I wrote this very simple application that listened to those notifications, and from one of them I actually got a reply. So probably that one. Now the fun part starts: the detective work. I get 10 bytes back. It looks like this. I started off with a baseline: zero grams, nothing on the scale. So I'm guessing that I'm looking for bytes that are zero; those ones. Then I put an energy drink on my scale, which just happened to be next to me. It weighed 235 grams. Now this value changed: I have 235 in byte 5. So I'm guessing that's the weight value. Then I zeroed the scale, removed the energy drink, and got minus 235. Now bytes 7 and 9 changed, but byte 5 is still 235. So I'm guessing 7 or 9 is a sign byte. Then I tried with 120 grams, and suddenly byte 4 changed. So after testing different weights, I managed to figure out that the bytes I wanted to read were 4, 5 and 7. And the formula ended up like this: if I take byte 4 times 256 plus byte 5, that gives me the weight, and then I can use byte 7 to figure out the sign: 0 for positive, 1 for negative. So my first demo is going to be legen... wait for it... dairy. I'm sorry, I really like puns. So let's take a look at the code. I've actually written a couple of helper methods. The first one is SetupNotifyAsync. To get notifications, I have to tell my Bluetooth characteristic: hey, I want notifications from you. So the first thing I do is get the service with a particular GUID. Then I get the characteristic with a particular GUID. And then I call WriteClientCharacteristicConfigurationDescriptorAsync, the longest method name ever, and say that I'm interested in notifications. And this method returns the characteristic to me when it's done. Then I also have a read value helper. Same thing here: I get the service, I get the characteristic, I read the value, and I say I want it uncached, because sometimes the device will cache it for me. That's good if you want low battery consumption, but in this case I want the latest value. When everything goes well, I get the data: a byte array.
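A minimal sketch of what a SetupNotifyAsync helper like that can look like, assuming the Windows.Devices.Bluetooth GATT API; error handling is stripped down:

```csharp
using System;
using System.Threading.Tasks;
using Windows.Devices.Bluetooth;
using Windows.Devices.Bluetooth.GenericAttributeProfile;

public static async Task<GattCharacteristic> SetupNotifyAsync(
    BluetoothLEDevice device, Guid serviceId, Guid characteristicId)
{
    // Get the service and the characteristic by their GUIDs.
    var service = device.GetGattService(serviceId);
    var characteristic = service.GetCharacteristics(characteristicId)[0];

    // Tell the device we want notifications for this characteristic
    // (the longest method name ever).
    await characteristic.WriteClientCharacteristicConfigurationDescriptorAsync(
        GattClientCharacteristicConfigurationDescriptorValue.Notify);

    return characteristic;
}
```

The read and write helpers described next follow the same get-service, get-characteristic pattern.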
And in my case, I have made this method generic, so I can actually say that I want a string back. But the Bluetooth device is always going to return bytes to me. In this case I just convert it; I'm guessing it's UTF-8, but that's not a given. And then I just return the bytes. And then we have the write value helper. Same thing here: get the service, get the characteristic. And here I'm actually asking: okay, in what way can I write to you? Can I write without response, or can I write with response? Depending on what it supports, I do that, and I just call WriteValueAsync. Again, bytes; always bytes. So in our application, I define the service ID; this is our mystery service. Then we have the characteristic GUID. Then I create a device selector from the service GUID. This is going to give me all the devices that implement this service. I get a collection and I just loop through them. I get the Bluetooth LE device, and here I check: does the name contain SenSun? Again, I don't have to do this if I know that all the devices implementing this particular service use the same characteristics, but just to be safe. Then, if I get a device, I run the SetupNotifyAsync method. So I tell the service I want notifications, I get the characteristic back, and then I just listen to the ValueChanged event. In the ValueChanged event, I get an array back, an array of bytes. I send it into my ConvertToWeight method and just set the property I have on my page to the new weight. And the ConvertToWeight method checks whether or not byte 7 is set, and then takes that sign times byte 4 shifted left (times 256) plus byte 5. So let's see if this works. I have an energy drink, I put it on the scale, and the value changes. Pretty simple, huh? I mean, this is not a lot of lines of code. So let's go back to the presentation. So, WowWee is a company that makes toy robots, and they've released a whole bunch of them. The one to your left is the MiP. It's a Bluetooth robot. That one is the MiPosaur. I've actually written an SDK to talk to these two. The one for the MiP is out already as open source; I'll share the link later on. And the MiPosaur one is coming. They also took a lot of old toys, robots that were controlled by remote control, and upgraded them, so now they can be controlled by Bluetooth. So let's talk a little bit about the Azure IoT Hub. Microsoft has released the Azure IoT Hub, a way to communicate with IoT devices. What I could have done here is shown how to upload sensor data to the cloud and do stuff with it. That's no fun. I want to do something more fun. So we're going to try to control one of these robots through the Azure IoT Hub. The first thing we need to do is actually set up the IoT Hub. I've done that in advance, so it's a little bit quicker to do this with slides. The first step is to click New; this is the Azure portal. The next step is to click Internet of Things and then Azure IoT Hub. Let's name this NDC. I have chosen the free pricing and scale tier, and I've chosen a location in Northern Europe. Just click create and then you wait a couple of minutes. Then you will see this screen, and your IoT Hub is ready for use. The free tier actually gives you 8,000 messages a day that you can send through the IoT Hub for free. That's awesome. The next step is to install the Device Explorer. It can be found at this URL.
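Stepping back to the scale code for a second: the ConvertToWeight logic described above fits in a few lines. A minimal sketch, with the byte positions as reverse-engineered earlier:

```csharp
// Decodes the 10-byte notification from the scale's mystery characteristic.
// Byte 4 is the high weight byte, byte 5 the low byte, byte 7 the sign flag.
public static int ConvertToWeight(byte[] data)
{
    int sign = data[7] == 0 ? 1 : -1;        // 0 = positive, 1 = negative
    return sign * (data[4] * 256 + data[5]); // weight in grams
}
```

With that, back to the Azure side and the Device Explorer.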
You can download the source for it, or you can download a full installer. It looks something like this. So, to be able to use the Device Explorer, you need a connection string. Let's go back to the Azure portal. Click the key icon, then iothubowner. This is created for you, so just click through, and then you have the connection string right there. Copy it and paste it into your Device Explorer, and you're all done. Now we can go to the management tab and click add. We could have done this from code, but since we're only going to use one device, this will work perfectly. I'll name this RoboSapien, I just press create, and I get it nicely in my list. Then, to create your application: I have chosen to create a Universal Windows Platform app, but this will of course work with any app. The first thing you need to do is install the Microsoft.Azure.Devices.Client NuGet package. Once that's done, you can go to this URL. Microsoft actually helps us even more with this. This is not going to be an Azure IoT Hub deep dive; I want to open your eyes to how cool and how easy it is to communicate through the IoT Hub with Bluetooth devices. So first, you go to this portal and you click connect your device. You will see a list of some of the devices you can connect to the IoT Hub. I have chosen "use another device" and Windows. And Microsoft is actually going to provide you with all the code you need. How nice is this? Just copy and paste it into your application, and you're done. You also need a connection string to your device. That's the same connection string you can find in the Azure portal, but you need to add a device ID. The Device Explorer actually gives you the correct connection string, so you can just copy it from there. So let's see. Bluetooth Internet of Things robot: that's a really boring name. It should be called Bluetooth Robiot, right? Of course. So there we go. Much better. Right, we're about to demo something. Now, actors know that they should never work with children or animals. As a speaker, you know that you should never work with Bluetooth that can be disturbed by 1,000 people in the same area as you. We also know that we should never rely on the Internet. So that's exactly what I've done: I'm relying on the Internet and Bluetooth. Okay, so let's just hope this works. There we go. Now, this robot actually has a bug in it, so I have to pair it every time I use it. It's meant to work with pairless communication, but it will work, hopefully; I'm actually talking to WowWee trying to solve this. So here I just went into the Bluetooth settings and paired the device. Let's go into messages. I'm going to show you the code in a while. So let's do an explosive demo: C4. Sorry, I need to start the app as well. There we go. Let's try that again. So now I'm actually controlling my robot through the Device Explorer into the IoT Hub, then back to my application and then over to the robot. Let's try something else. See, I actually have a list of things it can do. Let's try a 7. So all the commands that are available through the infrared controller are also available through Bluetooth. So let's take a look at the code. What I've done here is I create a device client. I create it from my connection string; this is the connection string to my device. Then I have a RoboSapien class. This is the class that talks to the RoboSapien, and we will look at that in just a second. And then I call RoboSapien.Connect.
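A sketch of that setup plus the receive loop described next, assuming the Microsoft.Azure.Devices.Client API from that NuGet package. The RoboSapien wrapper class is the demo's own, so treat these names as illustrative:

```csharp
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;

public static async Task RunAsync()
{
    // Device-scoped connection string: the one Device Explorer hands you.
    var deviceClient = DeviceClient.CreateFromConnectionString(
        "HostName=...;DeviceId=RoboSapien;SharedAccessKey=...");

    var robot = new RoboSapien();   // the demo's Bluetooth wrapper class
    await robot.Connect();

    // Tell the hub we're up.
    await deviceClient.SendEventAsync(
        new Message(Encoding.UTF8.GetBytes("RoboSapien connected")));

    // Poll for cloud-to-device messages and forward them to the robot.
    while (true)
    {
        Message received = await deviceClient.ReceiveAsync();
        if (received == null) continue;  // ReceiveAsync returns null on timeout

        string command = Encoding.UTF8.GetString(received.GetBytes());
        robot.SendCommand(command);      // e.g. "C4"
        await deviceClient.CompleteAsync(received);
    }
}
```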
This is going to try to find the device and connect to it. And then I also have a send event. The send event is going to send a "RoboSapien connected" message to my IoT Hub. And if you look at the Device Explorer, let's see: if I monitor it and then start the app, it will connect to the RoboSapien, and I will get the message "RoboSapien connected" if I'm monitoring the hub. And then I have receive commands. This is going to check: do I have any messages on my hub? If I have a message, I get the bytes and convert them to a message. Then I call SendCommand on my RoboSapien and pass in the message. So in this case, I'm going to get the bytes for C4, for example, convert them to text, and send that in to my RoboSapien. So let's take a look at the RoboSapien class. Here I'm actually not using my helper methods, just to show how easy it is. So, first thing: get the service GUID, get the characteristic GUID. Then I connect. I create the device selector, so I try to get all the devices that implement this service. And WowWee is actually one of the examples of a vendor that reuses their service GUIDs: I have another robot that works completely differently, has different characteristics, but the same service ID. I actually think it has the same characteristic ID as well. So I get the enumerator and I get the device. Here I check: does the name contain RoboSapien? Because the other robot is called MiP. I get the service and I get the characteristic. Then we have SendCommand. Here I just take the characteristic and write the data straight to it. I also have a method here that converts my C4 hex string to a byte array. That's all it takes. So I can send commands up to the IoT Hub; then they come back down to my application, or actually to every application that listens to that particular device, and it just sends the bytes out to the robot. So here are some resources. Microsoft actually has a lot of great samples on their GitHub account. We have the Device Explorer, and then we have the URL to my MiP WinRT SDK. Do we have any questions? I'm soon done. Thank you. Do we have any questions? No questions. So then let me round up just by saying that, as I mentioned before, there are three billion devices out there that were manufactured last year alone. It is so much fun to do this. And if you have any questions about this presentation or Windows development in general, or if you're ever in Sweden, please come by Coding After Work. I'm kind of ahead of schedule, so I'm going to let you go. So please contact me if you want to. Thank you.
The market totally explodes with new gadgets that can connect to tablets, pc's and phones. In 2015 alone there were 3 billion Bluetooth devices manufactured. In this session I will show how you can take advantage of that and how to communicate with devices that are Bluetooth enabled. I will also show how to figure out undocumented devices and how to use Azure IoT hub to turn them into IoT devices.
10.5446/51851 (DOI)
Hello, Oslo. Thank you, right? Hey, you guys, thank you very much for having me. This is amazing. This is my third year, and I just love Norway. It's just awesome. Let me tell you a few things I like, because it's been a wonderful couple of days so far. One thing I like is that I did some research, and it turns out that I'm an average Norwegian: 180 centimeters, unspecified weight, and I'm still working on getting the IQ up to the average. So if I applied for residence in Norway, I might be just your average guy in Norway, which would be awesome. And one thing I really admire about Norwegians is they speak English to perfection, like awesomely: idioms, everything, they've got everything. They've even got the not-so-nice parts of American English. For example, some Norwegian folks, I don't want to name names, they end a sentence like a question, you know, like: hello, I'm Jonas? I work at Cisco? And I'm like, well, I don't know your name, but Jonas sounds plausible, and I can't confirm your job at Cisco; I can't help there, but I'm glad you asked. Well, it could also be the opposite, which was new to me. There are folks who say a question as a statement; instead of the tone going up, it goes down. So the other night I ate at a hotel and then I went to have a beer. I go have a beer and I sit at a table, and this guy comes over, like a Viking, like two by two; okay, two meters by two meters. And he comes and says: are you having dinner? But it's like it's final. It was almost as if I heard: you're having dinner. And I was like, well, I didn't plan to eat again, but there's nothing I can say, so: order a salad. What can you do? I presume that guy was actually a bouncer at the restaurant, because he was too solid and too muscular to be just, you know, a server. So maybe they were shorthanded, and they said, you know, Trygve, why don't you come help here, because we're shorthanded with the waiters. And he was like, sure, tips are good. I don't know why people always give me these great tips, for some reason I don't understand. So anyhow, I love Norway, and I also like one more thing. I'm going to take 30 seconds for this, because I think it's wonderful. I like the, if I can pronounce it properly, egalitarianism in Norway. Was that understandable? All right, I like it. The other night I was watching TV, just flipping channels, not understanding anything. And I found a documentary about the prisons in Norway. They show the interior of a prison cell, which looks, you know, awfully good. And it was eerie; it was like, oh, this is déjà vu all over again, I've seen this somewhere. So I'm looking at the prison cell and it's like, this is familiar, where did I see this? And then I look around me and it's like, oh, it's the same furniture as in this hotel room here. Same thing. I was amazed. But anyhow, well, you guys are great. But I've got to say, you destroyed Troy this morning. Like, the keynote: I thought it was hilarious. And everyone was like, yeah, well, I got that one, next, let's wait for the next joke. And he had them, like, one after another. I was chuckling a couple of times, I couldn't contain myself, and I got these little looks, like, what's with this ass hat here? What's wrong with this guy? As if I were in a library and I was supposed to, you know, keep my mouth shut. And so in the end: okay, so Troy had this great talk, hilarious talk.
And I was like, why isn't everybody laughing? And they're like, there's some golf clap: okay, yeah, we appreciate your attempts. And then I talked to the guy afterwards, and I say, oh, this was brutal. And he says, oh, this is as expected; I've been in Norway before. So, all right, forget all about that. We're going to talk about efficiency today. Now I have a great task on my hands right now, and a great task on my hands is huge pressure. Here's the plan; here's what we're going to do tonight. Last talk of the day. I hope you're not going to mind if I run five minutes over, because, you know, I had to tell you all that stuff about Norway, right? I'm going to run a bit late. I have many slides; I must share new content with you. And my hope is that by the end of the talk, all of us are going to have a sort of mini imaginary hat, if you wish, which is our efficiency hat, and whenever we wear it, we're going to be able to look at things in a slightly different way. Just slightly different, which is going to allow us to find and improve those points in our programs that are less efficient than they might be. And you're going to see tonight how we look at a few absolutely classic old algorithms, literally from 50 years ago, and we're going to improve them together in ways that nobody knew how to for 50 years. I'm not kidding. First, I'm going to talk about sentinels. Who can tell us what a sentinel is? Who's got the Viking blood in them? Come on. What's a sentinel? Yes, please. A special value you put somewhere so that you can track its position in some other collection. It's a special value that you plant ahead of an operation to make sure you can arrange the computation in a more efficient way. Thank you. Awesome, right? So we have this sentinel notion, and that's great, because there's a joke which, my hope is, most of you don't know yet. How does a programmer find an elephant in Africa? Does anyone know? Great, so this is new to you. There's one guy; well, you can just tune out for a second here. So the joke is: you put an elephant in Cairo, and you start from South Africa and you go up north, and if you find an elephant, you're good. You look: are you in Cairo or not? If you're in Cairo, then you didn't find any, because the one you found is yours. Is that what I'm saying? Same about find, right? Let's look at the find routine. It's very simple, and I'm going to use the D language for examples here because it fits on the slides. So I define a generic function in the D language; you just put the type arguments first and then the regular function arguments, and I'm going to use index syntax just because it's easy. I'll talk a bit about that later. So we're going to have a loop, easy as pie. I have for i equals zero all the way up to the length of the range, which is kind of a sort of an array; you can think of a portion of an array, or a pair of iterators if you wish. I'm incrementing this good guy, and if I find the element, I break out of the loop, and what I return is the range from that particular position all the way to dollar, which is the symbol for the end of the range. So if I have an array with numbers in it, I look for one, and I return the portion of the array positioned at that particular number that I found. What if I don't find the number? What do I return? Talk to me. I don't find the thing.
When does the loop break? When i equals length, and then I return r from length all the way to length, which is the empty range: the sliver of empty range right at the end, which is nothing. Right? Just like in the elephant joke: you return when you're done searching and you found nothing, and in this case you're returning the empty range. This function has been implemented millions of times in millions of languages. Everybody knows it inside and out. So today we're going to make it faster. How do we make it faster? Obviously with sentinels, I presume, right? So, ideas; talk to me. Yes? Sorry? Oh, yeah, so save this guy in a variable. Actually, it turns out that the compiler is very good at doing that already, so we're in good shape there. Thank you, that's a good start; it's the right hat. Other ideas? Sentinels? Right. How about I take the element I'm looking for and put it at the very back of the range, in place of the last element, and then I run the search. What do I save, Hubert? Yeah, so I've got to save that. What computation do I save? Awesome. Thank you. Yep, this is right: I find my elephant, and then I look at it: is it my elephant, the one I probably painted pink, you know, whatever; I marked it in some way. Yeah, it's a good point, but Hubert, let me do the talk here; I have the microphone. All right. So I save this work, and this is important to save, even though I add some more stuff here and here, because this is my core loop. This work is going to be disproportionately much executed, right? Awesome. So let's do that. I'm going to save, as Hubert said, the last element of the range in a temporary c, and I'm going to put e, the thing I'm searching for, in its place. And then I do the loop. There's this whole scope exit thing, which means whenever this scope is exited, I put the element back, just to restore the previous state of affairs. And then here are my savings. This is the core loop. This is where the action is. This is where it's at, right? This is where it happens. I have nothing extra here, so I'm going to save time. And here I do that whole cleanup we were talking about: was I just at the elephant in Cairo? Is it my elephant? Is it pink, et cetera? So this is the extra work that I'm doing, but I don't care, because I do it once, right? So this is the fix-up. That's the prelude, that's the core, this is the fix-up, and I'm done. Terrific. Now, a few details. I do this whole scope exit thing: what if something throws an exception? I still want the range restored. What if the range doesn't support random access? What do I do? This doesn't apply, right? It's not going to apply if the range does not support random access; yes, I've got to fall back to the original version, a conservative version which doesn't do all of these tricks. So there are a number of things I'm not going to focus on, but if you do want to dig into those, there's a talk: just search for ACCU 2016 and my unpronounceable and unwriteable last name, and you're going to find it, and it discusses these things in more depth. I eliminated that part because I want to share new material with you that's not online. Essentially, it's nowhere to be found. Okay? Great.
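The talk's code is in D; here is the same sentinel trick rendered as a C# sketch over an array slice. It's a sketch under the assumptions just discussed (mutable random-access storage, single-threaded), not the speaker's exact code:

```csharp
// Sentinel-based linear search over array[start..end).
// Returns the index of the first occurrence of e, or end if absent.
static int FindWithSentinel(int[] array, int start, int end, int e)
{
    if (start == end) return end;

    int last = end - 1;
    int saved = array[last];   // prelude: remember the real last element
    array[last] = e;           // plant the sentinel so the loop must stop

    int i = start;
    while (array[i] != e) ++i; // core loop: no bounds check at all

    array[last] = saved;       // restore (scope(exit) handles this in the D version)

    // Fix-up: did we find a real e, or just our own elephant in Cairo?
    if (i < last || saved == e) return i;
    return end;
}
```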
So it doesn't work for all types, but let's consider things like integers — the basic, civilized types, right? Very nice. All right. Question: does it work or not? What do you think? How much improvement are you going to get? I don't know — who gives it 1%? Who gives it 1% or more? Oh, okay. Who thinks it's going to be negative — negative performance? Depends on the range. Depends on a number of things, such as the range size, because if the range has like five elements, doing a write is going to be a disaster, right? Okay, let's say we focus on the longer side — hundreds, thousands, millions of elements, because that's where it's at, right? So does this help or not? Who gives it positive performance, more than zero, for long ranges? Thank you. Yes. Great. Who gives it 5%? A few hands. 5%? I can't believe this is happening. All right. So actually it turns out it's pretty good. We have time in milliseconds here, and we have millions of elements here — we're talking searching in a floating point array. And just to make it clear for you, I plotted the speedup in percentage compared to the baseline, which is the conservative, simple find. And guess what? We get awesome speedups there, between like 20% and 60%. That's not nothing, you know? I could roll with that. That's something I could live with. Yes? Great. Excellent. You would think, however, that so far I've bored you because you knew all this stuff. Or worse, if you didn't know it, you didn't know it because it's not fashionable. It was — I'm not kidding — it was fashionable in the 70s. It was like, oh, those sentinels: take that ferrite core memory, scribble a sentinel in there by hand with a knife or whatever — you could literally hold it in your hand, right? Put a sentinel in there. It was a very popular technique. And then it fell out of use. It's still very good, actually, it turns out. You just have to do the proper specialization and introspection such that you apply it only when it's applicable. For example, how about this code again — how about multiple threads executing this code? You don't. That's a bad idea. I just talked you off of that, right? Look at it — it's not what you want. Definitely not. So there's a number of situations in which this doesn't apply, but you should apply it whenever applicable. So, you know, but these are like old things. Let's talk about new things. How about the workhorse: partition? Partition, the workhorse of quicksort. Who can define what partition does — partitioning an array? Partition: I give you an array, I give you a number. Partition it for me. What does it do? Huh? Split into parts. What are the properties of those parts? Somebody else. Yes, please. Awesome. So choose a pivot, and then put the little things on the left and the greater things on the right. And that's your partition. It's used by quicksort, and quicksort is a very important function, right? It's also used by median and nth element, which are also very important in a bunch of algorithms. So, okay, it's nice to optimize partition, right? That's all I'm saying. So let's look at the baseline partition, which is already highly optimized. I'm using a labeled break here, by the way, like in Java.
Put a label here, put a labeled break here. I just want zero redundant computation; that's why I need this. Otherwise, I would need to use a goto. And believe me, I never use goto — except when I do. I never use it except when I do, because when I do, it's awesome, right? So I use it only for a reason, but I never do. Never in my life, except when I do, right? So because of this labeled loop here, I can go like this. Well, here's how it works. I have low and high — two variables. I've got to mirror this for you, so I'm doing some awesome brain thing in real time: this is left, this is right. All right, you see? I'm an average Norwegian. I have the IQ, right? Okay, so I go from the left until I find something that doesn't fit — something too great, greater than the pivot. Now I come from the right until I find something that is less than the pivot. What do I do then? It's not on the slide. Talk to me. Swap them. Whoop — put them in place. And then what do I do? Go on. What do I do then? Swap again. And then what? And then they meet and I'm done. This is your partition function. I mean, think of this: it was 1956 when it was invented. At that time, it was one of the first algorithms ever. It was amazing — Tony Hoare, like, genius. And let me tell you this: he also invented quicksort. It took five years for people to get a correct implementation of quicksort. They had bugs in it for five years. I'm not kidding. So it was very hard back then. All right — I hope I have no bugs here. So this is the loop that goes from the low portion up until I find this guy, right? This is my pivot, because I put it here. Swap: I'm swapping the pivot into the first position, just to put it on the left. So this is my part: I found the left bound. And then I continue going from the right with the other bound, the high bound. And I stop if I have the pivot greater than or equal to the current element, and I've found the right bound. And then I swap and make progress. And this is my function. At the end there's just a little fix-up to put the pivot back where it belongs, and I'm done. This is my partition function. It's a classic implementation; you would find a variant of it in the STL. Now, there are a few subtleties about it which are interesting. Here I could use greater-than and it would still be a correct algorithm, because, you know, I go with low until I find something that's strictly greater than the pivot — I just go through the equal elements. I'm doing fine. And there's a very subtle problem with that. What is it? Who can tell us? It's an extremely interesting problem. And similarly, I could use greater-than here instead of greater-than-or-equal, which means I go through the equal elements — I skip the equal elements, I leave them in place. What is the problem there? Ah, genius. If all the elements are the same, or you have a lot of elements that are the same, what's going to happen? Somebody else. Another student here. It's an interview question. Well, let's say everything is equal. What's going to happen then? Well, what I have here is: this low is going to go all the way up. I start from the left and everything is the same, so everything just goes through.
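(A compact sketch of a Hoare-style partition in OCaml, with the pivot parked in the first slot as described. This keeps the explicit index comparisons — the overhead discussed next — and is my own rendering, not the slide's D code.)

```ocaml
let swap a i j =
  let t = a.(i) in a.(i) <- a.(j); a.(j) <- t

(* Partition around a.(0); returns the pivot's final index. Elements to its
   left end up less than the pivot; elements to its right, at least it. *)
let partition (a : int array) : int =
  let p = a.(0) in
  let lo = ref 1 and hi = ref (Array.length a - 1) in
  while !lo <= !hi do
    if a.(!lo) < p then incr lo             (* advance the left bound *)
    else if a.(!hi) >= p then decr hi       (* advance the right bound *)
    else begin swap a !lo !hi; incr lo; decr hi end
  done;
  swap a 0 !hi;                             (* fix-up: pivot into place *)
  !hi
```

Note that with all-equal input this version exhibits exactly the unfairness being discussed: the pivot lands all the way at one end.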
And I say, oh — so my partition partitioned the array in a completely useless way, all the way on the right. Whereas I want partition to have some fairness in it: if they're all equal, I want to end up somewhere in the middle. Because that's the nice thing. Why is it a nice thing? Because when you quicksort, if partition goes all the way to one end, what do you get? Quadratic performance. Awful. So partition must be fair, meaning whenever you have equal elements they must be fairly distributed along the two halves of the array. Very interesting. It turns out there are people who had these bugs, and they're awfully difficult to diagnose, because this is going to pass all unit tests. Unit tests are not going to help with this. Agile is not going to help with this. Scrum is not going to help with this. It's not going to be helped by those techniques. You've got to really — it's difficult. Anywho. Let's make it faster. Thoughts, ideas? I'll give you a hint: let's use a sentinel. How do we use a sentinel in this case? It's a bit more complicated. So consider this. The first line there puts the pivot in the first position in the array. At the front of the array I have the pivot already. So I know that the first element is less than or equal to the pivot and also greater than or equal to the pivot, which is good information. But on the other side I don't have a sentinel. So I have one on the left side, but I don't have one on the right side. What do I do now? Put one on the right. Save the element. Do what you did with find. So we could do this. I did it. It doesn't save anything. It just doesn't add up — the savings is like 0.5%. It's like nothing, and the function is more complicated for no good reason. So then I thought about how to minimize work here, because this is what we're looking at. We don't have an agenda to plant sentinels in places; we have an agenda to optimize software. This is what we want to do. This is what we're up for. So we're looking at minimizing work here. Consider this, for example — tell me what's overhead and what's legitimate work. For example, is this work? No, it's overhead: I'm comparing indices, which have nothing to do with the algorithm. That's not work. Is this work? Yes: I'm comparing elements against the pivot. I must do this; there's no way not to do this. I must look at the elements and compare them against the pivot. Whatever else I eliminate, this has to stay — if I don't do this, I'm not looking at the whole array; I'm not doing the right thing with all elements. So this is work. This is overhead. I'm incrementing things — these are overhead. This is work. The comparison between low and high is overhead. I have a lot of overhead mixed in with the legitimate work. So we want to reduce these overheads — these comparisons and these indices. This here is also overhead, but I don't care: it's at the end, not in a loop, so who gives a damn? Great. So let's reduce work. Here's an idea: how about we plant a sentinel in the middle, and instead of going all the way to one end, the scans meet in the middle and just stop there? You can't, because you don't know where the middle is. You don't know where those guys are going to meet up. So we can't do that. And I thought about that for an hour or so. It's crazy, right?
Everything I'm telling you here as if it were easy — it's, you know, sleepless nights, friends. I mean, you do the same, right? Come on. This is group therapy, right? Let's be honest. Okay. So we can't plant a sentinel in the middle, so we need to put one at each end. Now, here's a key idea. Let's put sentinels at both ends indeed, and we're going to create a vacancy. Who knows what a vacancy is? Not at a hotel — what is a vacancy in solid-state physics? Vacancy. Anthony. It's a missing electron in a crystal. And what happens if you have a vacancy in a crystal? Attraction. Yeah, there's some attraction going on. So wherever there's a hole, another electron is going to be prone to fill that hole, that vacancy, because it's an imperfection. By attraction, electrons are going to want to migrate there. And what happens when that electron gets into the vacancy? Another vacancy. So it's essentially moving the vacancy around. And physically, what happens is that the vacancy starts to behave like an electron with a positive charge — like a positron, except it doesn't explode when it meets an electron, you know. So it's like a positron: it's a vacancy, and it moves the other way. Electrons move this way and the vacancies move the other way. And guess what: this is how semiconductors work. There's a bunch of vacancies floating around this laptop right now. I'm not kidding. All right. So we have this vacancy idea, and here's how we apply it. We create an imperfection in the array. We have an array of numbers, and we put in a sentinel, and then we create a vacancy — a hole — in the array. We say, ah, this is kind of an empty position in the array. And then what do we do? Let's say I have the vacancy on the far right here. What's the first thing I want to do? Fill it. With what? Something appropriate, right? You can't go wrong with that. So we come from the left and we're looking for a big element — something bigger than the pivot. Once we find it, we fill the vacancy with it. What happens next? What's the next phenomenon? I have a new vacancy, here on the left now. Oh, what do I do now? Genius. So I come from the right, and I go to fill the vacancy: whenever I find something little, I put it in the vacancy here. And now I have a new vacancy over there, and this way I keep moving the vacancies. For how long? What's going to happen in the limit? I end up somewhere in the middle — at the right partitioning point. And then what do I do? So the vacancies meet right there at the partitioning point. Everything less is on one side; everything greater is on the other. And I put the pivot back, and I'm done. So I've got to save the pivot, you know, et cetera. So this is all a simple matter of programming once you've got the idea. It's going to be like three slides. Yeah, three slides — it's a long function. I'm not expecting you to understand every comma of it; there are complications. So this would be the prelude. I take the range and put the pivot in the first position — well, I have a special case here. I'm going to save the pivot, because right now it's at the leftmost point of the range. So I save the pivot, I put it there, and now I plant the pivot at the end as a sentinel. So I'm going to save the old value at the last position — this is length minus one.
So it's the last element in the range, the last element of the array. I'm going to save it, and then — bam — slap the pivot onto it. And that's my hole. That's how I start, with the vacancy on the far right. And then it's all, as I said, a matter of programming. And it turns out I get to save a bunch of work: I only need one test where there used to be tests in both of these long inner loops. I save a bunch of work. It's pretty awesome. And let me give you one more detail which I find interesting here. All of this vacancy filling and stuff you can think of as a half swap. People usually think in terms of comparing and swapping things when they think of sorting, partitioning, et cetera. Actually the whole vacancy business is half a swap. And that's great because it's more economical. What's better: doing the temp and those three steps, or doing half a swap here and getting the other half at another point? Awesome. I'm seeing some nods here that reveal I may be getting slowly, slowly to the average Norwegian IQ there, which is awesome. Thank you. So, getting there, right? The work has become a matter of moving the vacancy around, and at the end we fill it back. This is my core. It's become a lot cheaper — a lot more work, a lot less overhead. This is overhead. This is work. This is work. This is overhead. This is work. This is overhead. And this is work. It's almost 50-50, and some of it just can't be escaped, right? Awesome. In the end — I've analyzed the thing, run through it — there's a bunch of fix-up you need to do. But this is how it works. It's unit tested. What can go wrong, right? So there's a number of cases you need to fix up at the end, because you're running real crazy with these indices here — low, high plus two, et cetera. So there's a large fix-up part here. I don't care: it's done once. All right. So we're done with this. Let's see how well it works. So: time, milliseconds. Oh, that's pretty cool. We're looking at somewhere between like 3-5% all the way to like 25%. And guess what? This is a function that has not been improved in 50 years. It's a big deal. And moreover, this saving is going to translate into sort, because all sort does is partition — everything else is just smoke and mirrors. Most of the workhorse is partitioning, right? So with this, I get to improve sort significantly. And that's awesome. And it's new. And I hope it convinces you that you've got to look at these sentinel things, right? All right. So I thought I was smart. I thought I was above the average Norwegian IQ. I am not, because I did some research on the topic. I thought I was awesome — but actually, who's awesome? There's like three other people: two Indians and one Russian. I found this paper from 2011, by Abhiyagnar in England, which is very funny because it's one of the few papers I've read which is a research paper and has the title "Research Paper". I'm not kidding. So it's a research paper entitled "Research Paper" — which I love, because it's obvious what it is, right? You can't go wrong with that — with a subtitle about an engineered quicksort partitioning algorithm, by these guys. And they have the same vacancy. They describe the same vacancy idea.
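(A minimal sketch of the vacancy mechanics alone, in OCaml. For clarity it keeps the explicit lo/hi bounds tests; the version described above additionally plants sentinels at both ends to eliminate those tests, which is what forces the larger fix-up. Names are mine.)

```ocaml
(* Hole-based partition: a.(0) is the pivot; its slot is the initial hole.
   Elements move via half swaps — each fill relocates the hole. *)
let partition_hole (a : int array) : int =
  let p = a.(0) in
  let lo = ref 0 and hi = ref (Array.length a - 1) in
  while !lo < !hi do
    (* hole at lo: bring a small element from the right to fill it *)
    while !lo < !hi && a.(!hi) > p do decr hi done;
    if !lo < !hi then begin a.(!lo) <- a.(!hi); incr lo end;
    (* hole at hi: bring a big element from the left to fill it *)
    while !lo < !hi && a.(!lo) < p do incr lo done;
    if !lo < !hi then begin a.(!hi) <- a.(!lo); decr hi end
  done;
  a.(!lo) <- p;          (* the holes met: drop the pivot back in *)
  !lo
```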
I think it's a bit inferior, because you can't choose any pivot you want — it's just a bit more limited than our version. But this is the core idea. Also, Stepanov recently put a nice PDF on his site. If you just Google for Stepanov partition, you're going to find a nice PDF which discusses both sentinels and vacancies — he calls them holes. And the nice thing is, it discusses sentinels in one chapter, vacancies in the next chapter, and at the end there's an exercise: please combine both. So essentially I implemented the exercise, if you wish. But what's interesting is not that this work exists — it's that it's recent. It's not 1975, right? It's recent work: 2011, and Stepanov wrote his in, what, the 2000s. This is recent work. And it's interesting because with the right attitude, with the right outlook on things, you're going to find opportunities to optimize things. You may think I'm cherry-picking. I'm not cherry-picking. There's a bunch of work you can apply sentinels to. Dot product of sparse vectors; set intersections — same patterns. You want to merge sorted lists — the merge function, right? — you can put a sentinel at the end. Same with lexing and parsing: you put a sentinel at the end, you parse, and you lex a lot faster. A very great many applications. So — we have until like 40 past. Awesome, I can spend a minute here to talk about these. Dot product of sparse vectors: you have sparse vectors, which are index-and-value pairs sorted by index, and whenever you find equal indices, you add the product to the sum, right? How do I apply the sentinel here? So, dot product of sparse vectors. Most things are zero in a sparse vector, so you only store the non-zeros together with the index. So I have a sparse vector which has, say, element 3.2 at position 42, and 0.1 at position 100, and so on. That's my first sparse vector. And I have another sparse vector with the same structure. And whenever you multiply these guys, you need to find the equal indices, multiply those values together, and add them up. That's the definition of the scalar product, right? So how do you apply a sentinel to that layout? Yes? Stick something at the very end. It turns out there are subtleties. We've been talking about this on a forum, and essentially one of the best approaches is to put size_t max — the largest size_t — as the index at the very end of each vector. And then when you go through it, you only have to do one test on one of the three branches instead of a test on every iteration. So there are subtleties, but it can be done. Same deal with set intersection and merging sorted lists. Lexing and parsing — how does that work? Lexing a large file — like a C++ file after preprocessing, I may add. You know how big it is? 36,000 lines. 80 megabytes. It's amazing. It's amazing that the compiler can go through it — it's a miracle of technology. It can do it in two seconds, right? So you preprocess hello world — I'm not kidding. I actually have it; you know what, I'm going to show you. No kidding here. All right. Is this even visible? Oh, by the way, I'm building my slides with LaTeX, and what you saw — the graphics and everything, the plots — they were generated this morning in real time. It's part of my build. So the slides are built by running the benchmarks.
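(Returning to the sparse dot product for a moment: a sketch of the forum trick in OCaml, assuming each vector is an (index, value) array sorted by index with a (max_int, 0.0) sentinel appended by the caller. The termination test runs only on the equal-indices branch, because the two sentinels are guaranteed to meet there — the "one test out of three".)

```ocaml
let sparse_dot (xs : (int * float) array) (ys : (int * float) array) : float =
  let sum = ref 0.0 and i = ref 0 and j = ref 0 in
  (try
     while true do
       let (ix, vx) = xs.(!i) and (iy, vy) = ys.(!j) in
       if ix < iy then incr i                (* no end-of-input test here *)
       else if iy < ix then incr j           (* ... nor here *)
       else if ix = max_int then raise Exit  (* both cursors on the sentinel *)
       else begin
         sum := !sum +. (vx *. vy);
         incr i; incr j
       end
     done
   with Exit -> ());
  !sum
```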
So you don't want to do make -j, because it's going to run two benchmarks at the same time and they'd be messed up. Right? Anyhow, what was I proving here? Oh, hello world. Let's do a word count. 37,000 lines, and 538,000 words if you round up. And let's see the characters — look at that, that's like 1.2 megabytes. And that's hello world. So this is it. This is the program. I'm telling you, the sheer fact that hello world compiles, gets into an object file, gets linked with the linker libraries and stuff — and nobody knows how a linker works, right? — the sheer fact that it goes through is a miracle of technology. If I wanted to explain this to a guy who didn't know computing, he'd say I'm kidding him. This can't be right. What's next — e to the i pi is minus one, or what? So how do you apply sentinels to lexing and parsing this large hello world program? Here's an idea. Let's say I read the file in memory and I plop a zero at the end, which is an invalid character in any C++ file. Now, any lexer is going to have a huge switch statement which, depending on the current character, does different state things, right? What do I save there? What am I saving by planting a zero, given a big switch statement? You save one comparison per character, because otherwise you have a switch, and at the top of the switch you have "while I'm not at the end of the file", right? So you have a big while, and inside of it a big switch. And inside the switch you have a nice jump table and everything — the compiler knows how to optimize that. But outside it you have one if per character, which is crazy. So you can seriously speed things up if you just put the zero in, and you put the zero as a case inside the switch, because then the cost of testing for that case is divided by the size of that big table, right? This is awesome. So sentinels are very much usable outside of the obvious "find — I'm going to plant an elephant there" scenarios. Right? Now, you may still think I'm cherry-picking here. So let me switch gears and continue with this hat on. Let me discuss a different algorithm. Who knows about selection — the selection problem? Who knows about nth_element in the STL? nth_element. Yes, please. I saw a hand. You just say you've heard of it, you know of it. Okay. Somebody else, except the thundering voice of Hubert Matthews here. nth_element in the STL: I give you an array, I give you a number, and you give me — Mark? Awesome. So I give you an array, I give you half the length of the array, and nth_element is going to put the smaller things on the left and the greater things on the right — that's actually giving me, right at the middle, the element that would be there if the whole array were sorted. To rephrase: given an array and a number n, find what a[n] would be if the array were fully sorted. You just give me that one element. And on the face of it, if you approach it naively, you just sort the array and you're done. Just sort it and you have your answer. But actually there's a cheaper way to do it. Right? There's a variant of it too, like Mark said: you want to place everything that's less than a[n] to the left, everything that's greater to the right. There's a relationship between selection and the partition we just discussed.
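(Before moving on to selection, a tiny sketch of the lexing sentinel in OCaml — not a real C++ lexer, just enough to show the shape: the '\000' terminator becomes one more case in the dispatch, so neither loop carries an end-of-input test.)

```ocaml
(* Count identifier-like tokens in a buffer terminated by a '\000' sentinel. *)
let count_identifiers (src : string) : int =
  let buf = src ^ "\000" in          (* plant the sentinel *)
  let count = ref 0 and i = ref 0 and finished = ref false in
  while not !finished do
    match buf.[!i] with
    | '\000' -> finished := true     (* the sentinel is just another case *)
    | 'a'..'z' | 'A'..'Z' | '_' ->
        incr count;
        (* hot loop: no bounds check; the sentinel stops it *)
        while (match buf.[!i] with
               | 'a'..'z' | 'A'..'Z' | '0'..'9' | '_' -> true
               | _ -> false)
        do incr i done
    | _ -> incr i
  done;
  !count
```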
With partition, I choose the pivot and I partition things, but I don't know exactly where the pivot is going to end up. With selection, I say exactly which position I want, and I want everything less on one side and everything greater on the other. Applications: finding the top ten salesmen, the top 100 salesmen, finding the median height of the Norwegian male, age 46 — 180. This is it. I'm not kidding, right? So computing the median is very important in a bunch of applications. Great. And there are much more important applications, such as shortest path and nearest neighbor — a lot of computational geometry algorithms end up computing a median at some point or another. So it's an important algorithm. Now, for those of you who don't know about quickselect, I'm going to introduce it to you, because I think it's a very interesting and fascinating algorithm. Who knows about quicksort? Most of us, right? Who knows about quickselect? All right — I'm very happy to introduce quickselect to you. Here's what quickselect does; it's very clever. It does the partition part like quicksort. But quicksort then recurses to the left and recurses to the right, right? Quickselect partitions and gets some pivot position, and then it looks: what am I looking for? I'm looking for an index here, or here. And then — lazily, or rather cleverly — it recurses on only one side, either the left or the right. So whereas quicksort needs to fully sort the array, quickselect says: I don't care about this half, because the index I'm looking for is not there. I'm just going to recurse on one side of the divide, of the pivot. Very interesting. So you recurse only once. What do you think the complexity of quickselect is? The complexity of quicksort is n log n, right? So quickselect, which only does one recursion at each level, is going to come out to — obviously, for those of you who are expert mathematicians — linear. But as with quicksort, if you choose a bad pivot systematically, you're going to end up quadratic. So in a way the risk is even higher here, because you go all the way from linear to quadratic; in quicksort, you go from n log n to quadratic in the worst case, right? So with quickselect, if you don't have a good pivot, you're completely messed up. Great. So let me show you how nice and easy quickselect is. I just use partition as a primitive, and all I'm doing is recursing either here or here. Actually, I don't even care to recurse: I put a loop around it and just reduce the range. A very simple function. And I'm not kidding — this is production quality. You don't even need to optimize it; it's as good as it looks. You can't make it much faster. So that's great: quickselect. Awesome. Just to explain how it works: partition is what we discussed — you choose a pivot, it gives you the partitioning — and quickselect cleverly uses it repeatedly to get an exact partition where it wants it, usually around the median. Once you're partitioned, you're good. Great. If quickselect gets to eliminate some fraction of the array at each step, you get linear time. If you sometimes end up all the way on the left or all the way on the right, it's not good. All right. And as we discussed, pivot quality is the problem here. So — what techniques do you know for choosing a good pivot for quicksort?
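(A sketch of the quickselect loop just described, in OCaml. It uses a simple Lomuto partition as the primitive purely for brevity — any correct partition over a subrange, including the vacancy one above, slots in. Names are mine.)

```ocaml
let swap a i j =
  let t = a.(i) in a.(i) <- a.(j); a.(j) <- t

(* Partition a.(lo..hi) around a.(hi); returns the pivot's final index. *)
let partition a lo hi =
  let p = a.(hi) in
  let s = ref lo in
  for i = lo to hi - 1 do
    if a.(i) < p then begin swap a i !s; incr s end
  done;
  swap a !s hi;
  !s

(* Quickselect: no recursion, just a loop that shrinks the range toward k.
   Assumes 0 <= k < Array.length a. *)
let quickselect (a : int array) (k : int) : int =
  let lo = ref 0 and hi = ref (Array.length a - 1) in
  let p = ref (partition a !lo !hi) in
  while !p <> k do
    if k < !p then hi := !p - 1 else lo := !p + 1;
    p := partition a !lo !hi
  done;
  a.(k)
```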
The same problem applies. Ideas for a good quicksort pivot — like finding something that's about in the middle? Yes, please. You take the median of the left, the right, and the middle elements. People do that, right? Or you could randomize — say, I'm going to choose some random element, or choose three random elements and compute their median, and that's my pivot. So there are a number of heuristics that people use. What's the problem if you don't randomize? If you don't randomize — you choose first, last, and middle and compute the median — yes: you get degenerate inputs. You get patterns in the input that completely mess you up, which are going to be quadratic, right? And actually it's plausible, because input can be machine-produced — there's an attack you can imagine there. So: not good. And what are the problems if you do randomize — choose a random pivot? You may end up choosing a bad one anyway. And — do you know this phrase in statistics, "almost never"? No, I'm not kidding, actually. Do you know what "almost never" means? It's a scientific term, for those of you who don't know the science thing. "Almost never" means the following. Let's say I throw dice, again and again and again, many times. How likely is it that I throw a six every time? It gets smaller and smaller and smaller, and, ad infinitum, it's almost never. It's possible by the laws of physics — it could happen — but it's going to happen almost never. And that's what "almost never" means in statistics. The probability of you choosing bad pivots by random sampling, for an infinitely long array, is almost never. Right. But in real life — let me tell you, back to earth, folks — in real life, what happens? It's front-loaded. If you choose three bad pivots in the beginning, like Anthony said, you're messed up. Choose three bad pivots and you're done, because that's when the array is largest. So if you're unlucky enough to choose the first three pivots badly, you're going to do a lot of work that you shouldn't, and your performance is going to be like three times worse than otherwise. Right. So even with random pivots, I'm not cool. I'm not cool at all. So that's what people do — we discussed this. Then in 1961, five sacred monsters of algorithms — Blum, Floyd, Pratt, Rivest, and Tarjan — set out to prove that you cannot do selection in guaranteed linear time. On average you can, but — they thought — not in the worst case. So they wanted to write a paper that would prove you can't. And instead they discovered an algorithm that does it. True story. They were like: let's prove this can't possibly be doable — and then, oh, crap, man, look, it's working. It's working! I can't believe it. That's almost literally what happened. So they found an algorithm which is called median of medians. It's a miracle of human thinking. It's amazing, the human ingenuity that went into that algorithm. It's one of the most elegant algorithms I've ever seen. It's godly. And it has only one little problem. We're not going to insist — it has a little problem: it's three to five times slower than anything else.
That's fine, because even in the worst case it's linear. If you don't mind always waiting five times longer than usual, you're going to be fine. So, as a consequence, it's in all the books and in no implementations. Everybody reads about it, analyzes it — this is a good algorithm — and then never uses it; they just go with the random selection and stuff. So: 1961. We have 10 minutes to break that record. I am not kidding — we have 10 minutes to break that record. We're going to implement a variant of this algorithm, median of medians, that's going to beat everybody — kick tushy, everybody, and take names. Okay? We have 10 minutes. Are you with me for these 10 minutes? There's like three folks in the back who just leave — ah, the hell with this. I'm not kidding. We have, what, a few more slides and we're done. Nine slides, right? We can do this. So let's start with median of medians, the classic implementation. First of all, I have a special case: if the length is less than five, I just do it by hand — it's a few elements, it doesn't matter. But if I have more than five elements, I compute the median of medians. And the way median of medians goes — you don't need to look at the indices there — is very simple. Divide the array into groups of five elements: the first five, the next five, the next five, and so on. Compute the median of each of those groups of five. So now I have a fifth of the array in those little medians. That's why it's called median of medians: every group of five has its own median, and I compute that by hand, brute force. I have an algorithm I know how to do for five elements; I do it by hand. Five elements, one median. Five elements, one median. Five elements, one median. Then I have a fifth of the array, which is those medians, and I recurse, computing the median of that fifth of the array. So I get the final result: the median of the medians. Now here's the property of the number I get. I computed the median of each group of five, took that fragment of the array, and computed its median. By definition, that number is greater than half of those group medians, right? And for each of the group medians, how many other elements in its group are smaller than it? Each has two other guys in its group that are smaller than it, right? Are you with me? So for each group median below it, there are two more elements below it. By symmetry, the same holds above. So this median of medians is guaranteed — guaranteed — to sit between the three-tenths and seven-tenths marks of the sorted array. It's not going to be at the end of the array; it's going to be within a fraction of the real median. Remember when I told you a while ago that if I get to eliminate a constant fraction of the array at each step, I have linear time? And I get to eliminate at least three-tenths of the array by this computation. And that's how the proof goes. You compute medians of five by hand, you compute the median of those medians, and then you're guaranteed to be above the three-tenths mark and below the seven-tenths mark, so you eliminate at least three-tenths of the elements. And that's what this guy does.
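(A structural sketch of the pivot computation in OCaml. For clarity each group of five is sorted — a stand-in for the optimized median-of-five — and the medians go into a fresh array rather than reusing the front fifth, which the real version does; assumes a non-empty input.)

```ocaml
(* Median of medians: returns a pivot value guaranteed to sit between
   (roughly) the 30% and 70% marks of the array. *)
let rec mom_pivot (a : int array) : int =
  let n = Array.length a in
  if n <= 5 then begin
    let g = Array.copy a in
    Array.sort compare g;
    g.(n / 2)
  end else begin
    let groups = (n + 4) / 5 in
    let medians =
      Array.init groups (fun g ->
        let len = min 5 (n - 5 * g) in
        let grp = Array.sub a (5 * g) len in
        Array.sort compare grp;
        grp.(len / 2))
    in
    mom_pivot medians      (* recurse on the fifth-sized array of medians *)
  end
```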
M5 — this is going to be a brute-force function. It just does it. It takes a range and five indices, a through e. I'm not kidding, I didn't write this: I asked on a forum and a guy did it for me. I don't know his real name — he goes by a handle — but there's a thread, and I'm giving credit in the paper. Essentially it goes like this. I asked: how do I compute a median of five with minimum swaps and minimum comparisons? And there are like ten folks who think like they're from Mars — they think in a different way than me, because I can't do this. Other people know how to; I have no idea. And actually there was a guy who wrote a program that generates these functions. You go: okay, how many swaps and how many comparisons do you want? I'm not kidding. So there's one with no swaps and only comparisons, and there's one with a bunch of swaps and fewer comparisons. There's a spectrum there, right? Amazing. Anyhow, this works for five elements, and I tested it — and guess what, I can test it exhaustively, because how many permutations of five elements are there? Five factorial — which you should compute during compilation, my friend, right? — 120. So you run 120 next_permutations, and the unit test writes itself. So this is our brute-force median of five, and we use it here as a primitive for the groups of five. Very nice. Now here's what you do. You take these groups of five and you put their medians — those medians of five — in the first portion of the array. That's why I have the swap here, and I increment j every time. And that's because I want to reuse a portion of the array instead of allocating a new array. Makes sense? So I compute the median, and then I put it at the front of the array. And then the first fifth of the array is going to be the medians, and then I can continue with the recursion and everything. Awesome. Now, let's make this faster together. Five minutes. So I gave you the background: median of medians — compute medians of five, put them at the front of the array, recurse, and you're done. Now let's make it faster. Let's make a breakthrough after 55 years. Ideas? This is optimized up the wazoo, I guarantee it — the M5 is good. So this guy is what we're looking at. What can we do better here? A sentinel? No, it's not going to be a sentinel, because — wait, I have everything here; there's the section title: minimize indirect writes. So, because I want to minimize indirect writes: where are the most indirect writes happening? I do the median of five, which is highly optimized, I guarantee it. And then I do this swap here. And then I recurse on that first fifth, and do a lot more swaps. So that's where most of the writes are going to happen — the swapping. Essentially I compute these medians, swap them, and then swap them again. It's just useless. It's a lot of overhead. So here's a key idea. This is a key idea. You've heard of this guy — the bank robber? A journalist asks him: why do you rob banks? Because that's where the money is. You sound unconvinced. It's like: you'd better not rob a convenience store, because they have $15.50, right? How about this: my car is faster because it has more horsepower — Ferrari. What should you say instead? To get more speed, do less work. This is your hat. This is your attitude. This is your outlook.
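(A sketch of the classic gather step in OCaml: compute each group-of-five median in place and swap it into the growing front section, reusing the array instead of allocating. The `median5` argument is an assumed primitive that leaves the group's median at the middle of the five indices it is given — a stand-in for the brute-force M5 just described.)

```ocaml
let swap a i j =
  let t = a.(i) in a.(i) <- a.(j); a.(j) <- t

(* After this runs, the first (returned) slots hold the group medians. *)
let gather_medians (a : int array)
    (median5 : int array -> int -> int -> int -> int -> int -> unit) : int =
  let n = Array.length a in
  let j = ref 0 and i = ref 0 in
  while !i + 5 <= n do
    median5 a !i (!i + 1) (!i + 2) (!i + 3) (!i + 4);
    swap a (!i + 2) !j;      (* plant this group's median at the front *)
    incr j;
    i := !i + 5
  done;
  !j
```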
This is how you think: in terms of doing less work. There are a few cases in which more work is more speed. There's one case I know of — honest. One case: speculation. Speculative execution is more work, and you may actually throw it away, but on average it's going to give you more speed. I don't know of any other case. In general, there's only so much work you can do in a finite amount of time, and you want to do less work; otherwise you're going to have less speed. Let's do less work. Here's my idea number one — we have three ideas to discuss. Idea number one: better layout. We were using the first fifth of the array to put the medians in. You don't even need to understand every detail of the algorithm, but essentially there's a lot of moving data around during this computation of the medians of five — and it kind of misses the point. What do you want to do in a partition? Put the little things on the left and the bigger things on the right. So how about this. Remember our M5 routine, the brute-force one? What parameters did it take? The range and five indices. So it's actually able to swap elements wherever they sit in the array. So how about this: instead of choosing the first five elements, and the next five, and the next five, we choose like this — the first two elements, the middle element, and the last two elements. So we have two on the far left, one in the middle, two on the far right. That's my first group of five. What's my next group of five? The next two elements on the left, right? The next guy in the middle there, and the next two on the right. And I do this median of five, which is going to — statistically, cleverly — swap things such that the little things go to the left, the median-ish things end up in the middle, and the great things go to the right. Yeah, it doesn't rhyme. Sorry. So, statistically, I'm choosing my decomposition cleverly, instead of choosing the first five like an idiot, the next five like an idiot, the next five like an idiot — I choose like a smart guy. The average-IQ Norwegian, right? I'm choosing the first two elements, the middle element, the last two elements; and then the next two elements, the next middle element, and so on. So I'm going to exploit the fact that I can compute medians of any five indices, not only consecutive, contiguous indices. So we divide the array into five big subarrays. Compute the median of the first group, as I said — boom, boom, boom, right? So with the same swaps, we're going to statistically divide the array into smaller things, median things, and greater things, and this is going to be part of computing the M5s. I'm helping my next step with this step. Minimize work, right? That's where the money is: minimize work. So after I'm done with this, amazingly, the medians of five are already going to be the middle quintile, the middle fifth of the array. So we're already done with one fifth of the job. We don't need to move those guys anymore. It's done. It's a done deal. So the middle of the array is done in the first step, which was computing the medians of five. And then we have the littler things here and the greater things there, and I recurse on that middle fifth of the array, and then I do the fix-up.
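(A sketch of the index arithmetic behind the better layout, assuming for simplicity that the length is divisible by five: group g takes two cells from the left two-fifths, one from the middle fifth, and two from the right two-fifths, so the group medians accumulate contiguously in the middle fifth with no extra swapping. The helper name is mine.)

```ocaml
(* Returns, for each group, the five indices its median-of-five operates on. *)
let layout_groups (n : int) : (int * int * int * int * int) array =
  let fifth = n / 5 in
  Array.init fifth (fun g ->
    (2 * g, 2 * g + 1,                (* two from the far left *)
     2 * fifth + g,                   (* one from the middle fifth *)
     n - 2 * fifth + 2 * g,           (* two from the far right *)
     n - 2 * fifth + 2 * g + 1))
```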
So, amazingly, I'm going to swap a lot fewer elements — if you do the math, it's one tenth of the array — and we're kind of done there, which is amazing. So this was the big thing that took me over the top. Sorry — I thought about this for weeks on end. There's got to be a way to minimize those swaps. And the key here is that you need to swap across non-contiguous portions of the array, such that the little things go there, the big things go there, and then you're done. That's idea number one. So it's more complicated, but not by much — this is the whole function. Not by much. So, idea number one: boom, 50% speedup. Not there yet. Not there yet. But you know what scientists do: they never give up, right? So I've got to rig the competition now, right? I'm kidding. So now we've got to make it better. But we gained 50%, and moreover, we gained the right insight, because every other optimization is going to compound with this one — it's not going to compete with it. And on top of this optimization, which is layout, I'm going to be able to build more. And here's what I built on. I did my research: Chen and Dumitrescu. Dumitrescu is a professor at the University of Chicago. He's a fellow Romanian — you can tell from the last four letters. And they proved a very nice conjecture. Actually, they disproved a conjecture, by showing: you know what, you don't need groups of five. It's okay with groups of three or four. And that's much simpler — it's cheaper to compute. So they very cleverly have an algorithm that they call "repeated step". And I said: oh, this is exactly what I need. A simpler median — only a median of three, which is trivial. Boom. That's idea number two. Kind of Google, right? Stack Overflow, right? Well, not quite Stack Overflow — but I looked for it, I looked for all the work in the area, and this was it. This was what I needed, and it compounds with the layout idea. So I wrote the guys and asked: did you do this with the better layout? No — their algorithm by itself was not fast, but it becomes fast because you can compose it with this thing, with the better layout. So now you have the better layout and you have this thing. Awesome. We got to 200%. We got to 200%. We have, like, one minute. We've got to beat these guys — something like 3x. If we get to 3x, we're done. Idea number three: adaptation. So I ran ideas number one and two, and it was great for the median and terrible for anything but the median. If you want the top 1000, or the top one-third — anything that's not in the middle — it was bad. So I worked on this for a long time, and then I said: well, you've got to specialize. Whenever you're asked for something that's not straight at the middle, the median, you use asymmetric groups — two out of four, or two out of five. Use asymmetry to your advantage. If the index is small, you use a group like two out of four and you get something that's biased to the left; if it's large, you get biased to the right. So, as a guy said on Twitter, you choose the happy case of all algorithms. If it's at the median, I do what I just said. If it's small, I do one of these tricks — I choose different fractions of the array. And if it's large, I choose, you know, length minus those guys. The happy case of all algorithms. We're done. Awesome. So Google for this: fast deterministic selection.
You're going to find a very detailed draft paper on arXiv — I know how to pronounce it: arxiv.org. So you're going to find this guy, and it beats everybody to a pulp. Everybody. I did carefully optimized implementations of all the existing algorithms, and this guy beats them to a pulp. The more important graph you may want to look at here is the speedup compared to the classic heuristics: median of three, median of three randomized, and ninther randomized. The ninther is sort of a median of nine, approximately — they call it the ninther; it's a famous heuristic. So these are the best ones in practice: median of three, median of three randomized. I also tried median of five; it's about as good as median of three, so I didn't plot it, to not clutter anything. So what we have here is improvements between, what, 10% here and like 70% over the best in the world. Over the best in the world, friends. The best in the world. Better than the best in the world. We did it. We did it. To conclude: if you want more speed, put the hat on. You've got to do less work. Forget the sentinels — the sentinels are a means to an end. They're not the end. The end is less work, and we saw how sentinels are part of it. But in the second case it was simply less swapping: choose your layout properly such that you minimize the amount of data movement — the amount of writes you do, essentially. Marshal the computation to benefit you most. Arrange your computation: think of everything that influences your result, and make it such that everything flows toward what you need, what you want, with minimal work. Do aikido, if you wish, in programming — use your opponent's power, all that stuff, right? Okay. Don't do it like Vikings with the axes there, right? Use aikido. All right. And most importantly, do your research. It was very important for me to find the algorithms that the Indian authors and Stepanov implemented — it gave me good baselines, and I could credit them properly. It's very important to do your research; know what people are doing in your field. And in this case, if I hadn't found the Chen and Dumitrescu paper, I would have been blocked, because I didn't have their idea, their insight. Without that I would have been like: ah, that's interesting, but I'm not there — I can't get the performance. So: there's treasure everywhere, as Calvin and Hobbes said; you just have to find it. So, an announcement: I'm writing a book about this kind of stuff. It's called Fastware. It's a bunch of optimization techniques, such as how to benchmark, strength reduction, cache friendliness, indirect-write elision, memoization — and the opposite of memoization, which is obliviation — hoisting and lowering, and a lot more. Thanks very much.
To some extent, optimization is to our industry what sexual intercourse is to teenagers. There's a veil of awesomeness surrounding it; everybody thinks it's cool, has an opinion about it, and talks about it a great deal; yet in spite of ample folklore, few get to do it meaningfully or at all. Improving the ordeals of teenage years being too daunting a project, the next best thing to do is teaching how to write fast code. So Andrei set out to write a book about it. This talk is a sneak preview into some of the book's material.
10.5446/51852 (DOI)
Alright, well, I think I'm going to start a minute early, because it's pretty much time. So this talk is Understanding Parser Combinators. My name is Scott Wlaschin — that's my Twitter handle. I have a website, fsharpforfunandprofit.com, and the code and the slides and the video will be in a directory called "parser" at some point. So this is one of my talks where I try to squeeze a whole day's worth of stuff into about 60 minutes. So as usual I'll be going very fast and covering a lot of ground. As always, I don't really expect you to remember everything, but if you can get just some of the concepts, and if it gets demystified, that's the main thing — it just becomes less intimidating. I'm going to be using F# for the code examples. The concepts will work in pretty much any programming language, other than COBOL or something. So here is some typical code using parser combinators, and when you first come across code like this, it looks kind of intimidating. The main thing is there are all these strange symbols — there's a vertical bar thing, and angle brackets with dots, and stuff — and I think that's one of the things which is particularly intimidating. And so I guess the goal of this talk is: if you can understand this code, or at least not be intimidated by this code, that would be a success in my book. Obviously it's really hard to learn everything in 60 minutes, but if you can just look at this and say, that looks vaguely familiar, let me look that up and see what it does. So first of all I'm going to talk about what a parser combinator library is, and then we're going to build a very, very simple parser, which will be the foundation for the rest of the talk. We're going to build three very simple parser combinators, and then we'll start using those combinators to build more complex combinators from the simple ones. We'll have a little side excursion on improving the error messages, and then finally we'll get to the core thing, which is how to build a JSON parser using these techniques. So what is a parser combinator library? When you write a parser, there's something you're trying to match — a keyword, or an int, or a string, or a float, or something — you're trying to match this thing, and in this model you create a step in a recipe, a parsing recipe, and you end up with this object which is a Parser of something. This Parser of something is really a recipe that, when you run it later on, will give you back the thing you're trying to look for. So you have these parser things, and then you combine these parser things with other parser things to make new kinds of parser things. And this combining of things together — that's what a combinator is. That's all it is; it's not particularly mysterious. But the whole point is that when you combine things, you get things of the same type, so you can then cascade that — you can build on that and make bigger and bigger things from smaller and smaller things. This is the whole concept of composition, and why composition is so important — and especially in functional programming, why people are so strong on using that technique. So here we have, in this case, a recipe to make a thing C from an A and a B.
And then finally, when we've got the recipe — the parser — we have to run it, and when we run it, we get a success or a failure, depending on whether we succeeded in matching the thing. And in order to run it we also need some input, which is the stream of characters that we're running over. So that's it; that's parser combinators in a nutshell. Why parser combinators, as opposed to something like lex and yacc, or ANTLR, or the various other techniques? The first thing is that they're written in your favorite programming language, which is nice — you don't have to drop out into a different language to do it, which also means you can use your favorite tools to do the programming in, which is nice. There's no preprocessing needed: in traditional parsers you have a lexing stage and a parsing stage, and you transform the AST, and all this kind of stuff. In the parser combinator model, that's all one thing, and because there's no preprocessing it's very REPL-friendly — you can use it interactively, which is kind of nice. Because it's such a small kind of thing, you can use it to create little DSLs really quickly. Like Fog Creek, the software company: they used FParsec, which is the F# library for this, to write a little DSL for parsing the query strings for their search engine — a very, very simple DSL: something AND something, OR a quoted string, and so on, just like Google has a very simple little DSL. And finally, from my point of view, it's a fun way of understanding functional composition. So even if you're actually not going to use it for anything, it's a fun thing to learn, because it gives you insight into what a nice functional library looks like. So let's start with a simple parser, and I'm going to create four versions of this parser, starting with something really simple and getting more and more complex. The first version of the parser is going to just parse the character A — that's all it is. Overall this is going to be a function — there's a parsing function in this gray box — and there's going to be an input, which is a list of characters, or a string, or a stream of characters, whatever. It's going to return true or false depending on whether it succeeds in matching the character, and it's going to return the remaining input. Now, if it matches the character, it's going to consume that character from the stream and return the remaining characters. What's really important is that the inputs and the outputs are immutable, just like in all functional programming. In particular, that's very useful because it means that if the parsing fails, you can take another parser and apply it to the same input. You don't have to worry that the file pointer has moved around and you're not starting at the same place. That's a key aspect of the combinator design. So let's show you the code. Here's my parse-character-A function. If the input is empty, I return false. If the first character is A, then I return true, and I also take the remaining characters, just by starting from index one and returning the rest of the string. And if the first character doesn't match A, I return false. So that's a really brain-dead kind of parser. If you're not familiar with F#, these are actually the return values — you don't need a special return keyword in F#. So that's what we're returning. Okay, that's version one. Version two is a little bit better. The problem with version one, of course, is that it's hard-coded for the character A.
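(The sketches that follow are in OCaml rather than F# — close enough that they mirror the slides almost line for line. Here is version one, hard-coded to the character A:)

```ocaml
(* Version 1: true/false plus the remaining input; immutable in and out. *)
let parse_a (input : string) : bool * string =
  if String.length input = 0 then (false, input)
  else if input.[0] = 'A' then
    (true, String.sub input 1 (String.length input - 1))
  else (false, input)
```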
Let's make it a bit more flexible and allow us to pass in any character. So I'm going to pass in an extra parameter now, which is the character to match. It could be an A or a B — it could get really exciting here. The other thing is that the output is going to be a bit more complex this time, because on a match I need to return the character that I matched — since the caller doesn't otherwise know what was consumed — and the remaining input. And on a failure I want to return a nice error message. I don't just want to return true or false; I want to return a message like: I was looking for an A and you gave me a B, something like that. So let's look at the code for that. Very similar. This time I'm returning a nice error message — I was expecting this, and I got something else. And if it matches the character, I return the matched character and the remaining input. The problem with this code is that it doesn't compile, because the return values here are different types — that return value is different from the other one. On the failure case I'm returning a string, but on the success case I'm returning a pair, a tuple of the matched character and the remaining string. So this won't compile. The way to fix this in a functional programming language is to create a union type — a choice type, I like to call them — which is a choice of either case. So we're going to create a type, and we're going to call it Result, and it's got two cases: it's a Success or it's a Failure. And if it's a Success, it's going to have a "tick A" — that's the F# way of saying it's a generic type; in C# or Java that would just be a capital T or something. So those are the two choices, and then our output now has two choices as well: on the success branch it's the pair, and on the failure branch it's the message. So let's see how the code has changed. Now, instead of returning the string, we return Failure of the string, and instead of returning the pair, we return Success of the pair — and again, on the other failure case, we return a Failure as well. So that's our version two code. I know I'm going through this very quickly, but hopefully the concepts are really obvious; the details of the code you don't have to worry about too much. So the return values are now the same type, and the compiler is very happy. Right, so that's version two. How can we make it more complicated? One of the things about this model is that we have this character to match, and we know it in advance: when I'm looking for a particular character, I know that I'm looking for a quote or a semicolon or something — I know that when I'm designing the parser. The input we don't know in advance; we don't know the input until later on. So what I really want to do is be able to work with all the information that I do know, and delay working with the rest of the stuff until I actually have it at runtime. And the way to do that in a functional programming language is to return a function. So instead of having a two-input function like this, I'm going to turn it into a one-input function which returns another function. So you see, there's before and there's after. This thing is now going to return a function, and that function is just like the original one: it takes the input stream and returns the result, and so on.
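(Version two as an OCaml sketch — the choice type is named `parse_result` here only to avoid clashing with OCaml's built-in `result`; it plays the role of the F# Result type on the slides.)

```ocaml
type 'a parse_result =
  | Success of 'a
  | Failure of string

(* Version 2: parameterized on the character, with a real error message. *)
let pchar_v2 (to_match : char) (input : string) : (char * string) parse_result =
  if String.length input = 0 then Failure "No more input"
  else
    let first = input.[0] in
    if first = to_match then
      Success (first, String.sub input 1 (String.length input - 1))
    else
      Failure (Printf.sprintf "Expecting '%c'. Got '%c'" to_match first)
```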
So now what we've done is we've got a way of building a function that, when we run it with an input stream, will actually do the work. And version four, we're going to take this function, which is kind of awkward to use, and we're going to wrap it in a type. We're going to just wrap it up so it becomes a thing. And we're going to call this thing a Parser, and in this case it's a Parser which returns a char. And here is the type. This is where it starts getting a little bit ugly. So the Parser type contains a function inside the type. So this is definitely functional programming here. It's a function which has been treated as an object. And the function in there takes a string and it returns the Result. And then we're wrapping it up in this Parser type. So that's our basic parser. So let's see, we'll go back to our recipe model and see how it works. We've got this character we want to match. We run it through this thing, we now have this parser, right? And then we're going to combine all these parsers. And remember that a parser basically just means: it's something that, if you give me some input later on, I will then give you a success or failure later on. And when I want to run it, I give it the input. And again, if you look inside the parser, basically it's just a function inside there. So all I have to do is run that function with the input and it gives me the success or failure. Let's look at how the run code works. So to run a parser with an input, basically you just unwrap the parser to get the inner function and then you call the inner function with the input. So the piece of code that runs it is really trivial. All right, let's see some code. That's enough talking. So this is the real code. There's the Result type. There's the Parser type. There is the code that matches the thing. There's the run code, let's see if this works. And so here is a parser for the character A. There's some input. If I run that character, whoops, no I don't. Thank you very much. If I run that parser on that input, I get this down here. That's the success. This is the character that was returned. That's the character that was matched, and this is the remaining string. But if I run it on bad input, like the first character is a Z, when I try and run it, I get this failure message: I was expecting an A, but I got a Z instead. So that is it. That is our parser done and dusted. Pretty straightforward. So hopefully that makes sense. That parser, that's it. We're done with the parser logic. Everything else is now combining these parsers. So let's look at some basic ways of combining the parsers. So this word combinator, it has a technical meaning, which is basically any function that depends only on its inputs, which is not really a very helpful definition. So typically when we talk about combinators, we talk about combinator libraries, and a combinator library is a library designed around combining things to make new things. That's a combinator library. Very common in functional programming. You have a parser combinator library. You have an HTML combinator library. You might have a database combinator library. It's a common way of designing libraries. So here's an example of combinator addition. Here's a combinator adding two integers: you get a new integer. If you concatenate two lists, you get a new list, and in F# the at sign is list concatenation. And finally, if you add two parsers together, you get a new parser. So here's the question.
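Before answering that question, here is a sketch of the version-four parser wrapped in a type, together with the trivial run function and the demo just shown (building on the Result type from the earlier sketch):

```fsharp
// Version 4: wrap the inner function in a single-case union,
// so a parser becomes a first-class "thing" we can combine.
type Parser<'T> = Parser of (string -> Result<'T * string>)

let pchar charToMatch =
    let innerFn (str: string) =
        if System.String.IsNullOrEmpty(str) then
            Failure "No more input"
        elif str.[0] = charToMatch then
            Success (charToMatch, str.[1..])
        else
            Failure (sprintf "Expecting '%c'. Got '%c'" charToMatch str.[0])
    Parser innerFn   // wrap the function up

// Running a parser: unwrap the inner function and call it with the input.
let run parser input =
    let (Parser innerFn) = parser
    innerFn input

let parseA = pchar 'A'
let good = run parseA "ABC"   // Success ('A', "BC")
let bad  = run parseA "ZBC"   // Failure "Expecting 'A'. Got 'Z'"
```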
What are the different ways you can add two parsers together? What can go there between the two? So the first thing is you can chain them in sequence. So you can say: match this thing, and then match this next thing. I'm calling it the andThen combinator. Or you can say: match this thing, and if that doesn't work, match this thing instead. So I'm calling it the orElse operator. And another really useful one is a map combinator, which basically says: whatever you've parsed, transform it in some way into something else. You might have parsed a string and you want to transform it into an int, just because you know it's a string of digits, for example. So let's look at these three basic combinators. So the andThen one. So this is the logic for it. You run the first parser. If it fails, give up. Otherwise, you take the remaining input and you give it to the second parser. If that fails, give up. If they both succeed, we're going to return a pair, because we've got a result from the first parser and we've got a result from the second parser. We're going to combine them into a pair and return it. So let's look at the code. So we're going to define an inner function, because we've got this inner function all the time. We run the first parser. We check the result. If it's a failure, return. If it's a success, we go to parser two. And now I'm going to page down to the next page. So there's the inner function. There's the remaining input. And we're going to use that remaining input for the next bit. We take this remaining bit, we run it through the second parser, and we test the second parser's result. If it's a failure, we return. If it's a success, we now know that we have these two pieces that we need to combine. So we create a combined value, which is the pair. We call that a success. And we've also now got the remaining thing after the two parsers, so it's the second remaining stream. And then we finally take this inner function and we wrap it up in a parser and return it. So that is a really fundamental combinator written in 15 lines of code. So it's just important that we have to keep track of which is the remaining input. There's the combined value and there's the inner function that gets wrapped up again. The orElse is kind of similar. We run the first parser. If it succeeds, we're done. If it fails, we take the original input, which hasn't changed, and we run the second parser on that input. And if the second parser succeeds, we return, and if the second parser fails, we return anyway. So either way, we return the result of the second parser. And finally, the map, similar kind of thing. We run the parser. If it succeeds, we take the parsed value and we run it through this function to transform it into something else. And if it fails, we just give up. So this is where these funny operators come from, because we don't really write andThen and orElse. It's quite nice to use infix operators. So in the F# parsing world, we use .>>., dot, angle brackets, dot, to mean the andThen. And the dots are important. We'll see why the dots are important in a minute. The orElse is a vertical bar inside angle brackets, <|>. And the map is a vertical bar with double angle brackets, |>>. So when you see these symbols, that's what they are: andThen, orElse and map. So let me actually give you a demo. Oops. There we go. There's the andThen code. There is the infix version of andThen. There's the orElse code. There's the infix version of orElse. There's the map code and there is the infix version of map. All right.
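A sketch of those three combinators and their infix spellings, building on the Parser type above:

```fsharp
// andThen: run p1, then run p2 on whatever input p1 left over.
let andThen p1 p2 =
    let innerFn input =
        match run p1 input with
        | Failure err -> Failure err
        | Success (value1, remaining1) ->
            match run p2 remaining1 with
            | Failure err -> Failure err
            | Success (value2, remaining2) ->
                Success ((value1, value2), remaining2)   // pair of both results
    Parser innerFn

// orElse: try p1; if it fails, try p2 on the same (unconsumed) input.
let orElse p1 p2 =
    let innerFn input =
        match run p1 input with
        | Success _ as result -> result
        | Failure _ -> run p2 input
    Parser innerFn

// mapP: transform the parsed value on success.
let mapP f parser =
    let innerFn input =
        match run parser input with
        | Success (value, remaining) -> Success (f value, remaining)
        | Failure err -> Failure err
    Parser innerFn

// The infix spellings used in the talk.
let ( .>>. ) = andThen
let ( <|> )  = orElse
let ( |>> ) x f = mapP f x
```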
So here are some little parsers. A then B. Right. So we take A and then we try and get B, and we want both of them. And if we try and run it on ABC, we get a success: we return a pair, and the remaining string is C. If we run it on ZBC, the A is going to fail, so it's saying: I'm expecting an A and I got a Z. Now if we run it on a different string, the A is going to match, but the B is now going to fail, and you actually get the nice error message that I was expecting a B and I got a Z. So it actually knows about the different parsers, and it keeps track of where you are in the string. So that's not bad. Here's the orElse thing, very similar. So it can match an A: AZZ, that works. You can also match a B: BZZ, that works. And then you can start combining them. You can say it's a B or a C, and then you can say: an A, and then a B or a C. And here is the map in action. So I'm going to take an A and a B, and then I'm going to map it. I'm going to take this pair, which is a pair of characters, I'm going to turn each character into a string, and then I'm going to add the two strings together. And when I run that, I now get a string back. Instead of a pair of characters, I get a string with two characters in it. So there we go. And here's, let's do an integer one. So I'm matching these two characters, one and two. Then I've got a pair of characters. I'm going to turn them into strings, and then I'm going to turn the resulting string into an int. So this should actually be an int, like this. And let's see if it is. There it is. It returns an int. You can actually see it says it's a result which returns an int down here. All right. So that's it for the basic combinators. So already we've actually done quite a lot. We've got a basic parser. We've got a couple of basic combinators. And now we can really go to town and start combining these basic ones in more complex ways. So let's look at some of the ways you can do that. There's a function in functional programming languages, reduce, the reduce function. And what it does is, basically, given a list of things or a sequence of things, it takes some sort of operator and it sticks it between every element in the list. So here is a one, two, three list with three elements, and it's going to be reduced by the plus symbol. And that's exactly the same as writing one plus two plus three. That's what reduce does. Now if we do it with parsers: here we have a list of parsers. So parsers are things, and we can put them in a list. So here's a list of three different parsers, and we're going to reduce them using the andThen operator. So that's exactly the same as writing parse character A, and then parse character B, and then parse character C. Or we can use the orElse operator and combine them that way. So that's the same as parse character A, or else parse character B, or else parse character C. So that's quite nice. And this vertical bar with the thing, that's the F# pipe operator. If you're not familiar with it, it's just kind of like the Unix pipe. It takes the one thing on the left-hand side and it feeds it into the function on the right-hand side. OK. So here's our first kind of compound combinator: choice. You give me a list of parsers and I will make a single parser that matches any of them. So I'm just doing List.reduce. It's very nice. But then we can take that and build on that. So let's say we have a list of characters we want to match. We take that list of characters.
We run the normal map, which is built in, and we map each of those characters to a parser for that character. So we're mapping using pchar. So now we have a list of parsers, and then we run choice, and now we have a single parser which matches any of the characters. And here's a real example. Let's say I want to parse any lowercase character. I just say anyOf and I give it a list of characters that I'm looking for. Or let's say I want to parse any digit. I just say: here's the list of characters 0 to 9, and any of those things will match. So already you can see I'm beginning to build up some complex stuff from some basic things. Another important combinator is the sequencing thing. So I have a list of parsers and I want to do them in sequence. And the code for this is a little more tricky. First I'm going to write a helper function that, given a pair of parsers, takes the output of the pair and basically list-concats them together. So I have a list of parsers. I map each parser to a list singleton, and then I use reduce with this helper function. Don't worry about how this works. The point is that in six lines of code I've got quite a powerful combinator. And I can use that combinator now to make a parser that matches a string. So a string is basically a sequence of characters, and I want to match each of those characters in turn. So if I'm matching the string literal true, T-R-U-E, I want to match each of these, and if any of them doesn't match, then the parser fails. So I take each character in the string, I map it to a parser, I use sequence to turn it into a parser which returns a list of characters, I then convert that into an array of characters, and then I convert the array of characters back into a string, and now I have a parser for strings. And like I say, I wouldn't worry about really understanding the code. But this is like a few lines of code: once we've got the basic combinators done, the other combinators become very simple to write. So there's parse lowercase, there's parse digit. So if I run the lowercase parser, it says: yes, I succeeded and I found an A. If I parse a lowercase and the first letter is an uppercase A, it's a failure: I was expecting a Z. It says I was expecting a Z because it tries each of these characters in turn and the Z is the last one; it gave up on the A, it gave up on the B, and when it finally gave up on the Z, that was the error. So I'm expecting a Z. That's a kind of ugly error message, and we will deal with that shortly. Similarly with parsing a digit. Let's move down to parsing a string. So I want to parse the string ABC. If I pass in a proper string, that's successful: it matches ABC and the remaining string is DE. If I put a bar in the second position, it knows that it's expecting a B there and got a vertical bar. If I put a bar in the third position, again, it knows it's expecting a C there. So the error messages are all pretty good. They could be better, but that's not bad considering we've literally written like 30 lines of code so far. All right, so we're not done: more combinators. Many, many more combinators. You can build a huge library of combinators, a combinator library. So the first set of combinators is what I call the more-than-one combinators. So sometimes you have something where you want to match more than one of a certain thing: more than one comma, more than one character or whatever.
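Before those more-than-one combinators, here is a sketch of choice, anyOf, sequence and pstring along the lines just described, building on the earlier sketches:

```fsharp
// choice: any of a list of parsers, by reducing with orElse.
let choice listOfParsers =
    List.reduce (<|>) listOfParsers

// anyOf: any of a list of characters.
let anyOf listOfChars =
    listOfChars
    |> List.map pchar    // each char becomes a parser
    |> choice            // any one of them

let parseLowercase = anyOf ['a'..'z']
let parseDigit     = anyOf ['0'..'9']

// Helper: combine two parsers-of-lists by concatenating their results.
let concatP p1 p2 =
    (p1 .>>. p2) |>> (fun (list1, list2) -> list1 @ list2)

// sequence: run a list of parsers in order, collecting all results.
let sequence parserList =
    parserList
    |> List.map (fun p -> p |>> List.singleton)  // each result -> one-element list
    |> List.reduce concatP

// pstring: match a literal string, character by character.
let pstring (str: string) =
    str
    |> List.ofSeq        // string -> list of chars
    |> List.map pchar    // -> list of parsers
    |> sequence          // -> parser of char list
    |>> (fun chars -> System.String(List.toArray chars))  // -> parser of string
```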
So there's a many combinator, which is zero or more, and a many1 combinator, which is one or more, and then optional, which is zero or one. That's really common. And here's a good example: whitespace. A whitespace character is any of a space or a tab or a newline, and then whitespace in general is one or more whitespace characters. So there we now have a whitespace parser, and we can then combine that whitespace parser with other parsers. Another set is what I call the throwing-away combinators. We saw the one with a dot on both sides, that's the andThen combinator. Sometimes you want to parse something and then ignore what you've parsed. You just want to match it, things like double quotes in a quoted string or list literals or something: you want to match it and then throw it away. So in this one, you put the dot on the left-hand side; that means you keep the left-hand side and you throw away the right-hand side. In the second one, you keep the right-hand side and you throw away the left-hand side. And another useful one is between. So you have three parsers, and you parse the first thing but throw it away, you keep the second one, and you throw away the third one, and again that's something like a double-quoted string. So let's look at that. There's a double-quote parser, parse a single double quote, and a quoted int basically says: between a double quote, parse an int, and another double quote. So we'll throw the double quotes away and just return you the int that is parsed. There you go, that's another one. And separators is another one, really, really common: a comma-separated list or a semicolon-separated list or something, same kind of thing, one or more, zero or more. There's a comma, parse comma, parse digits, and then parsing one or more digits in a list is digits separated by commas, and you have to have at least one of those. So I can demo that, and here we go, some more. You can see some of this code is a bit more complicated. The overall parser library is about 500 lines altogether. So I can show you the full thing later on. All right, so there's digit. So define a digit, and then digits is one or more digit, and then an integer is basically one or more digits, and then we're going to convert the digits we find, using map, into an int somehow, with a little helper function. And then when we run that, it finds the one, but when we have two digits at the beginning, it picks up both digits, and when we have three digits at the beginning, it picks up three digits. So it will basically pick up as many digits as you have before it hits a non-digit. And similarly, here's a list, same kind of thing. So this is a comma-separated list and it returns, I didn't pass it in properly. So you can see it's returning a list of digits that it found, and if I change this to one, two, three, it successfully picked up the new one. There you go, that's that. So I know I'm whizzing through all this stuff, but like I say, it's really the concepts of how you build these things together. All right, now, we're doing fine for time here. Good. So what can we do now to improve the parser? The first thing we can do is name the parsers, because as you saw in one of those cases, when it gave an error message, it said: I was looking for a Z, or I was looking for a nine, when what you're really doing is looking for a digit. And because it doesn't know that it's a digit, it will just give you the last thing it was looking for, which is kind of unhelpful.
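Before fixing those error messages, here is a sketch of the more-than-one, throwing-away and separator combinators just described, building on the earlier sketches (returnP is a small helper the later combinators need):

```fsharp
// Lift a value into a parser that always succeeds and consumes nothing.
let returnP x = Parser (fun input -> Success (x, input))

// many: zero or more matches; never fails.
let many parser =
    let rec parseZeroOrMore input =
        match run parser input with
        | Failure _ -> ([], input)
        | Success (first, rest) ->
            let (subsequent, remaining) = parseZeroOrMore rest
            (first :: subsequent, remaining)
    Parser (fun input -> Success (parseZeroOrMore input))

// many1: one or more.
let many1 parser =
    (parser .>>. many parser)
    |>> (fun (head, tail) -> head :: tail)

// opt: zero or one.
let opt p = (p |>> Some) <|> returnP None

// Keep only one side of an andThen pair.
let ( .>> ) p1 p2 = (p1 .>>. p2) |>> fst   // keep left, throw away right
let ( >>. ) p1 p2 = (p1 .>>. p2) |>> snd   // keep right, throw away left

// Keep only what's between two delimiters.
let between p1 p2 p3 = (p1 >>. p2) .>> p3

// One or more p, separated by sep (e.g. digits separated by commas).
let sepBy1 p sep =
    (p .>>. many (sep >>. p))
    |>> (fun (head, tail) -> head :: tail)

// Whitespace, built from the pieces:
let whitespaceChar = anyOf [' '; '\t'; '\n']
let whitespace = many1 whitespaceChar
```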
So what you can do is you can take this parser object, because it's an object now, and you can just add a property which is a name property, and in this case we'll give it a name, we'll give it digit. And when we use it like this, again, as always, we'll create some sort of cryptic operator to hide it from other people and make our lives harder, but in this case it's the question mark, <?>. So this is what the code looks like when you run it without a label. It's trying to parse the digits. The digit is defined as zero up to nine. When it can't find a nine, it'll say: I can't find a nine, which is a really unhelpful message. If you take that same parser and you give it this label, digit, when it fails now, it will say: I was trying to parse a digit, I couldn't find a digit. And that, again, is much more helpful. And the other thing you can do, obviously, which is really nice, is to give the line number and the column number where the parsing failed. If you have a large file, obviously, you don't want it to say "I was expecting a comma" without telling you where it's expecting a comma. That's not at all helpful. So again, we just have to change our input: instead of just having a stream of characters, we have a stream of characters along with the current line and the current column. And then every time we parse a character, we increment the column number and the line number. And we can write some nice little error handling. The messages have become much nicer. So here we're trying to parse an integer. The minus is okay. The Z is not a valid integer. And so it says column one is an error. And we can even put a little caret saying: here you are in the line. Here's a float. This Z is wrong. So column four is an error. And again, we can write the error line and we can also write the little caret telling you where it is. So that makes the error handling much more friendly. So this is not very hard to do. I'm not going to show you the code, because this is where it does get a little bit ugly, but it's not that hard. You can see we obviously had to do it. Okay, so building a JSON parser. I'm going to use the JSON spec at json.org. And that has lots of pretty pictures. And this is the first picture, which is that a JSON value is one of the following things: it's a string, or it's a number, or it's an object, or it's an array, or it's true or false, or it's null. So how can we represent this in F#? Well, in F#, we can use a choice type. So here's our choice type. A JSON value is either a string with a string inside it, a number with a float inside it, an object which is a dictionary of key-value pairs, where the keys are strings and the values are other JSON values. An array is just a list of JSON values. A boolean is just a boolean, and a null has no value at all. So that's a type that represents a JSON value. All right, let's start parsing it. So let's start with some easy ones. The true and the false and the null, these are just literals. We've already got a thing that parses string literals. So that's easy. Let's start with null. Yet another little helper operator and more cryptic symbols. And I apologize for these cryptic symbols, but this is part of the reason why these libraries look complicated: they have all these obscure symbols. You just have to get your head around them. Make sure you just have the reference sheet on hand. There aren't that many, though. I mean, you've seen five or six of them.
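Two pieces from this part in code form, as a sketch: a simplified labelling operator (the talk's real version also stores the label on the parser and tracks line and column, which is omitted here), and the JSON choice type as described:

```fsharp
// A simplified labelling combinator: on failure, report the parser's
// friendly name instead of the low-level message.
let setLabel parser newLabel =
    let innerFn input =
        match run parser input with
        | Success s -> Success s
        | Failure err ->
            Failure (sprintf "Error parsing %s: %s" newLabel err)
    Parser innerFn

let ( <?> ) = setLabel

// e.g. let digitLabelled = anyOf ['0'..'9'] <?> "digit"

// The choice type for JSON values:
type JValue =
    | JString of string
    | JNumber of float
    | JObject of Map<string, JValue>
    | JArray  of JValue list
    | JBool   of bool
    | JNull
```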
And once you get the hang of it, there's not that many you have to memorize. So what this new operator is going to do is run the parser, and it's going to take the output, but it's going to ignore the output of the parser, assuming it succeeds, and give you back some result that you specify. And that's very common if you have a string literal. So in this case, I'm looking for the string null. If I find it, I don't really care what the characters are, because I already know it's a null. So I'm just going to give it the null value, which is this JNull, an F# case constructor. So I don't need to process the contents of the string in any way. So I can just ignore it. So that's how you parse a null. And then I'm going to give it a label, so that when it fails, rather than saying "I couldn't find an L", it's going to say "I couldn't find a null", which is a much nicer message. Similarly for booleans: I have a parser for the true literal, and then I'm going to map it to the boolean value true. I have a parser for the false literal, and I'm going to map it to false. And then my boolean parser is just a choice between the true parser and the false parser. So there it is. I'm just using the choice, the orElse operator. And then I'm going to give it a label, boolean, so that if it errors out, I get a nice error message. All right. What about strings? This is where it starts getting a bit more complicated. So a JSON string can be any Unicode character, blah, blah, blah, or it can be various escaped characters, or it can be a hexadecimal escape. So let's start with this one. I'm going to break it into components. So one of the nice things about parser combinators like this is you can break the task into small pieces and build the bigger one from the smaller ones. So I'm going to just work on the individual subsections and then combine them later on. So the first subsection I'm going to work on is this one that says: any character other than double quote or backslash or control characters. And I'm going to call this the unescaped character parser. And here is the code. An unescaped character is something that satisfies a character predicate. So satisfy is the one you haven't seen yet. It basically says: give me a function which is a predicate for characters, and if it matches that character, it's okay, and if it doesn't match the character, it fails. So in this case, it will work as long as the character is not a backslash and is not a double quote. And I'm going to forget about control characters for now. So if it satisfies that condition, which is any other character, that parser will succeed. We will get an unescaped character. The next one, I'm going to call this set the escaped characters. There's a lot of them. There's like eight of them. So what I'm going to do is I'm going to create a list of all the eight possible things. And for each thing, I'm going to say: here's the string that I'm looking for, and if I find that string, here is the output that I'm going to return. So unfortunately, we've got the escapes. So in this case, I'm looking for a backslash followed by a double quote, and I have to escape them both. And if I find it, I'm going to return the quote character. If I get a backslash followed by a backslash, that's going to return a backslash, and everything's doubled up. If I have a backslash followed by a forward slash, that's going to return a forward slash. A backslash followed by a B is going to return the backspace character, and so on and so forth.
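A sketch of this step, continuing from the earlier sketches: the ignore-the-result operator, the null and boolean parsers, and the satisfy-based unescaped-character parser (the failure message in satisfy is illustrative; the escape table itself is assembled next):

```fsharp
// Run the parser, throw away its result, and return x instead.
let ( >>% ) p x = p |>> (fun _ -> x)

let jNull =
    pstring "null"
    >>% JNull        // we don't care about the matched characters
    <?> "null"

let jBool =
    let jtrue  = pstring "true"  >>% JBool true
    let jfalse = pstring "false" >>% JBool false
    (jtrue <|> jfalse) <?> "bool"

// satisfy: build a parser from a predicate on characters.
let satisfy predicate =
    let innerFn (str: string) =
        if System.String.IsNullOrEmpty(str) then
            Failure "No more input"
        elif predicate str.[0] then
            Success (str.[0], str.[1..])
        else
            Failure (sprintf "Unexpected '%c'" str.[0])
    Parser innerFn

// Any character except backslash and double quote (control chars ignored here).
let jUnescapedChar =
    satisfy (fun ch -> ch <> '\\' && ch <> '\"') <?> "char"
```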
And then I take all these escape pairs, and for each pair, I'm going to convert it into a parser. And the parser is going to take the first item in the pair, the string to match, and that's where it says pstring toMatch at the bottom in red. And if I find that thing, I'm going to return the result, which is the second part of the pair. So I can process basically a whole list of items in one go. And this is one of the nice things about using a programming language to write your parsers in: this is standard F# code. There's nothing special about the parser library here. If I was using C#, I could use LINQ, for example; instead of using List.map, I'd use .Select, and that would give me the same thing. Now I have a list of parsers, and no matter how many, to combine a list of parsers into one parser, I use choice. And so I've combined all my parsers into a single parser. All right. And then I'm going to add a label to make the error messages nicer. And this final one is the Unicode escape. I'm not going to show you the code for that, but basically you define a parser for an individual hex digit, and you have four of those parsers in a row, preceded by a backslash and a u. And that's how you define that. And now I have my three sub-pieces, and I want to combine them together. So the string is basically a double-quote character, followed by zero or more of these characters, followed by another double-quote character. How do I define that? Here's my quote character. So it's a pchar of a quote, and I'm going to give it a label, so it says: I'm looking for a quote. The JSON character is either an unescaped character or an escaped character or a Unicode character. And then the main parser for that is a quote character, followed by zero or more JSON characters, followed by another quote character. And I'm going to throw away the quote characters. So you can see the dots are on the inside. So this is basically the same as the between. Okay. So the important thing is that I can build up more complex parsers by combining the simpler parsers. So it's tedious to write a parser. I mean, the JSON spec, it's not that hard, but it's kind of tedious. You have quite complicated little things. But you can just follow the railway diagrams and write the code. It pretty much corresponds exactly to the railway diagram. That's one of the really nice things about this style of parsing. And then a proper JSON string is a quoted string, and then I have to map it into one of these JString cases, which is one of the cases in the main type we had, and give it a nice name. All right. Numbers. Okay. This is a bit more intimidating. So break it into smaller pieces. There's a sign, which is optional. There's what I'm calling the integer part, which is either a zero, or digits one to nine followed by zero or more normal digits. Okay. So let's look at that. So there's the optional sign: I match the hyphen, and I say it's optional; there can be zero or one of these hyphens. The zero part is just matching the string zero. The digits one to nine is basically a parser that satisfies: the character is a digit, but the character is not zero. The normal digits are anything that satisfies: the character is a digit. And I'm using Char.IsDigit rather than zero to nine, just in case there are some Unicode digits that are not ASCII digits. I'm not an expert in Unicode, but almost certainly there are some weird numbers out there somewhere. All right.
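Those four number sub-parsers might look roughly like this, using the combinators from the earlier sketches (names are illustrative):

```fsharp
// The building blocks of the JSON number grammar, as described:
let optSign = opt (pchar '-')          // zero or one minus sign

let zeroP = pstring "0"                // the literal zero

let digitOneNine =                     // a digit, but not zero
    satisfy (fun ch -> System.Char.IsDigit ch && ch <> '0') <?> "1-9"

let digit =                            // any digit (Char.IsDigit, not just ASCII 0-9)
    satisfy System.Char.IsDigit <?> "digit"
```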
So non-zero integer, we said, is a one-to-nine digit followed by zero or more normal digits. So I'm going to combine them using that andThen. And then I want to turn that into something useful. So I map it: I've got a little pair, I turn the first one into a string, because the first one's a character, the second part is going to be a string, and I combine them into a new string. So I've now got a string there. And then finally, the entire integer part is either the zero bit or the non-zero bit. So it's tedious, and it follows the design. You can see it follows the railway diagram. It's not hard to write, it's just boring to write. Same thing for the fractional bits. A fraction is a decimal point followed by one or more digits. That's easy to write: there's a decimal point, followed by one or more digits. The exponent part is either a lowercase e or an uppercase E, followed by an optional sign, followed by one or more digits, blah, blah, blah. Same kind of thing. So it kind of gets boring after a while writing all this stuff. But there you go. The good thing, by the way, is that when you do write this, it will do type checking for you, so you won't be able to mess it up too much. If you try and combine something that's not the right type, it won't compile. So you're pretty much guaranteed that at least whatever you type in is going to work. You might not parse exactly what you want, but it won't crash with some weird error. So now we have our four different parts. We're going to combine them. So we say it's an optional sign, combined with an integer part, with an optional fractional part, and an optional exponent part. We're going to take this whole thing and convert it into a JNumber and give it a label. And so on and so on and so on. I'm not going to go through the whole JSON parser, but you get the idea. And so finally, we have all these parsers for these individual pieces, and we want to combine them into a parser that parses any JSON value. And here's the code. To parse a JSON value, you have a choice of something that parses a null, something that parses a boolean, something that parses a number, something that parses a string, something that parses an array, and something that parses an object. So the code really, really matches the parsing diagrams, which is very nice. So it's pretty easy to write this kind of stuff. All right, so let's actually see the demo. Right. So before we do, I think I've got enough time to show you one thing here. Yeah. I just want to show you: here's the entire parser library, including all the line-number handling stuff. It's a little bit complicated. But even with all this line-number handling and some utility stuff, the code altogether, there's parsing an integer, there's parsing a float, is 496 lines. Okay, so less than 500 lines for the entire parsing library, which includes utility things for parsing a float, parsing an integer, parsing spaces, parsing whitespace, parsing a string, all that stuff, all these parsers in less than 500 lines. The JSON parser itself, here we go. So here's the JSON value. Like I said, this is the real code. There's the escaped-character code. There's the number code. And so on, and so on. But the whole JSON thing is 295 lines. So let's actually test that. Will it come up? Yes, it does come up. So here's an example of the null. So there's the JNull parser, and it succeeds and it returns a JNull.
If I pass in "nullp", rather than just saying "I was expecting an L", it gives a nice message: column three, error parsing null, unexpected p at that position. So that's a much more helpful error message. Then parsing a boolean: if I try and parse "trucks", it says I'm trying to parse a boolean, you gave me trucks, there's an unexpected t. I guess it gives up on the true: it can't find the true, so it tries to find the false, and it can't find the false. So that's what that error message is. It's backtracking. Here's the string stuff. Let's go down to a real piece of JSON code. So here is a real JSON fragment. And you can see it's got a string, it's got a boolean, it's got a birthday, which is another object with three properties in it, and then the favorite colors is a list of strings. So if I parse this, show this up here, you can see it successfully parsed it. It returned a JSON object. The JSON object contained a map, a dictionary. Inside the dictionary there's a property called birthday, which in turn is a map, which in turn has the day property, which is a JNumber, a month, which is a JNumber, and a year, which is a JNumber. Favorite colors is a JSON array, which contains a JSON string, and another JSON string, and so on and so forth. If I change this to a JSON number, let's see if this works. It should say it's a JSON number. And there's the JSON array. And the first thing is a JSON string, and the second thing is a JSON number. Now if I put a bad character in there, like an angle bracket or something, a square bracket, and I try and parse it: again, line 5, column 0, parsing an object, unexpected square bracket. And the json.org site has some other examples. This is one from their site. So it's quite a complicated one. If I run it, I get the complete thing. I successfully parsed it. It's a map containing widget. The widget is a map containing debug and image and so on and so forth. So there's a full JSON parser in 300 lines of code. Not bad, I think. So like I say, the details can get quite complicated, but I think you can see the concept: how you build up the more complicated parsers from simpler ones. So what have we got? I think we're pretty much done. The most important thing is we're treating functions like objects. So the original parser returned a function. That's a very functional programming kind of thing to do. We treat functions as things in their own right. And in this case, we wrap that function in a type. And once we'd wrapped it in this Parser type, we could then manipulate it. Those types, I was calling recipes. People normally call them effects or computations. But the point is, you've got these recipes. They don't actually do any work until you actually give them the input stream. But what you can do is combine them before you actually have the input stream. You can basically do a little programming with these recipes. You can take two recipes and combine them to make a bigger recipe. And that's a very cool thing. So you're actually kind of programming with types. You're programming with combinators. So you have a little program, and then you run the program with the input. And I think, hopefully, you get the idea of the power of these combinator libraries. We just started with three basic combinators, and from that we could build the choice and the anyOf and the sequence and the string and all sorts of other ones, built from those basic ones. And that's this whole thing of building complex things from smaller things.
This is really the essence of composition, the essence of functional programming. And they're very small, but they're very powerful. Like I said, the combinator library is 500 lines, not bad. And with that library, we could actually write a JSON parser in 300 lines. And people using these kinds of things, you can write binary parsers as well. You don't have to write string parsers. They wouldn't be particularly efficient, though. By the way, this code that I've shown you is not at all efficient. It's really an example to get the concepts across. If you want efficiency, I would use a more serious library. So for example, in F#, the FParsec library is the one to go for. And there are similar libraries for all other languages. So thanks very much. The code will be in the parser directory there. If you have any questions: slides and video are there, if you want help with F#, there's fsharpworks consulting, and if you want more about F# itself, go to fsharp.org. Thanks very much. And if you've got any questions, just come and see me afterwards. And don't forget to fill out the thing at the back. Thank you.
Traditionally, writing parsers has been hard, involving arcane tools like Lex and Yacc. An alternative approach is to write a parser in your favourite programming language, using a "parser combinator" library and concepts no more complicated than regular expressions. In this talk, we'll do a deep dive into parser combinators. We'll build a parser combinator library from scratch in F# using functional programming techniques, and then use it to implement a full-featured JSON parser.
10.5446/51927 (DOI)
A friendly welcome to a desperate topic. Sabino Milam visited friends in South America, in Chile, and experienced a situation that has not been covered by the media in Europe on that scale: the fight of the people for dignity and justice. She witnessed the uprising in Santiago de Chile, and I wish you give her a warm welcome. Thank you very much. My name is Sabino Milam and I will talk about Chile. Chile despertó, Chile's awakening, has been the slogan of the protests in Chile. To understand what is happening there, we have to take a look at the past. Here we see the banner of a protester in Santiago on Plaza Italia, which was renamed by the demonstrators to Plaza de la Dignidad, place of dignity. You see the picture of Salvador Allende, and there is written Venceremos, we will win. In 1970, Salvador Allende was elected as the first socialist president in Chile. Allende and his unity of political parties, the Unidad Popular, planned to reform the country. They wanted to nationalize the copper industry, which belonged to American companies, nationalize the banks, and carry out a land reform. The USA sees Allende as a threat and fears the rise of a second Cuba. The then Secretary of State Kissinger says: I don't see why we stand by and watch a country go communist because of the irresponsibility of its own people. The Nixon administration provides the right-wing elite in Chile with a lot of money and help from the CIA. On September 11, 1973, the military, led by General Augusto Pinochet, launches a coup d'etat against Allende and his Unidad Popular. The military bombs the presidential palace, La Moneda, and Salvador Allende, who refuses to resign as elected president, dies in the flames. By the way, I highly recommend everybody to listen to or to read the last speech of Allende, which is a great speech and also very political. The soccer stadium in Santiago becomes a concentration camp of torture and death. The very beloved singer Víctor Jara is murdered there. Thousands of people are killed by the military. Others disappear, the so-called desaparecidos, prisoners whose fates are unknown. Following the concept of the American economist Milton Friedman, and with the help of a group of his Chilean students, the so-called Chicago Boys, the dictator Augusto Pinochet begins a radical privatization of the country. More or less everything is privatized. And Jaime Guzman, the dictator's intellectual right-wing consultant, changes the constitution in a way that the concept of radical neoliberalism is deeply embedded in it. Though Chile is celebrated as an economic miracle, the truth is that very few people get richer and richer, while the middle class gets poorer and becomes enslaved to the banks. In 1988, a referendum ends the dictatorship of Pinochet and the transition to democracy takes place. But the neoliberal constitution is not changed, and it is in use until today. Today in Chile, everything has been privatized, even the water. This affects the health system, the education system and the pensions. Public schools are in bad shape, as is the public health system. People die on waiting lists for essential surgeries. Others are drowning in debt trying to pay for medical costs. The pension fund AFP is privatized, and nearly all Chileans are forced to pay into it. The exceptions are the military and the Carabineros de Chile, the police; these organizations have their own, much better systems. On October 18, 2019, a group of young high school kids jump over the turnstiles in a metro station in Santiago.
It is an act of civil disobedience and a protest against a fare increase of 30 pesos. But the protest does not limit itself to the fare increase. It spreads quickly to include the entire neoliberal system as the root of Chile's extreme social inequality. The demonstrations begin with the slogan: no son 30 pesos, son 30 años. It is not 30 pesos, it is 30 years. All of a sudden, the whole country seemed to be protesting in the streets with cacerolazos, that is, beating pots and pans, and honking car horns. You have to imagine that as an incredible noise. Santiago was really, really noisy with these cacerolazos. Here we see the Plaza de la Dignidad, place of dignity, the new name for Plaza Italia. Plaza de la Dignidad is located in the center of Santiago. And here you see the monument of General Baquedano, which the demonstrators tried to tear down; they were not capable of that, so they changed it daily. This photo also shows the slogan of the protesters: Renuncia Piñera, resign Piñera. Piñera, whose nickname is Piranha, is the right-wing president of the country. And his eye is falling out here on the banner, and that refers to the brutality of the police, who shoot out the eyes of demonstrators. The movement gets bigger and bigger, and the president, Sebastián Piñera, gives a speech in which he declares war against his own people: Estamos en guerra, we are at war. And he sends out the military into the streets. He probably thought this would be a good idea because it worked so well in 1973, and my guess is he thought this would intimidate the Chilean people. Well, the opposite happened. On 25th of October, nearly two million people were peacefully demonstrating in the streets of Santiago, with the slogan: no tenemos miedo, we have no fear. And a symbol of the protest became the Mapuche flag, which you see here on this picture. The Mapuche flag has this sun in the middle. And the Mapuche are indigenous people who live in the south of Chile and are persecuted by the Chilean government under anti-terrorist laws. The conflict is about land: who owns the land and who can use the natural resources. Here we see demonstrators playing at the monument of General Baquedano in Santiago. This monument became the epicenter of the protests. There were musical bands playing, and this one played songs by Víctor Jara, the singer who was murdered in the soccer stadium. And his song El Derecho de Vivir en Paz, The Right to Live in Peace, became one of the hymns of the movement. Here we see a demonstrator. You see he has the Mapuche flag and wears glasses as protection against the tear gas. And he wears a mask which says: yo apruebo, I consent. And this refers to the demand of the demonstrators for a new constitution, because the creation of a new constitution is a key element for the demonstrators. And with a new constitution, they mean a constitution which is free from neoliberal elements. Many Chileans demand a basic change in the system: the right to live a life in dignity. This old man has a sign which says: gracias, valiente juventud. Thank you, courageous young people. At the end of a long working career, people have ridiculously low pensions, very often around 200,000 to 250,000 pesos. That is about 200 to 300 euros. And you have to know, Chile has very high living costs, so it is not much cheaper than Germany. I saw the slogan of a demonstrator who said: I have more fear of my retirement than I have of the cops.
As a consequence, Chile has a high rate of old people who commit suicide because they can't make a living. And you see also many old people in the streets who sell little things, just to get by. Very soon, the artists joined the protest. Here we see street art, and this shows the very popular singer Mon Laferte. Mon Laferte is well known in Latin America. And this picture refers to a situation in November 2019, when she won at the Latin Grammy Awards. And she walked bare-breasted on the red carpet, and on her naked breast was written: En Chile torturan, violan y matan. They torture, rape and kill in Chile. And this street art poster is a reference to that. The whole protest has been organized through social media, such as Facebook, Twitter and Instagram. There are hashtags on Twitter, for example, like Chile despertó, Chile's awakening, or Renuncia Piñera, resign Piñera, and they are filled with information about the protests. The official media in Chile, such as the TV, do not report about the movement. If they report at all, it surely is in a criminalizing and discriminating way. The answer to the protest from President Piñera: first, his answer was to send out the military into the streets, which I already told you, and then he sends out the police, the Carabineros de Chile. The Carabineros de Chile is a highly militarized police force. Here you see them using tear gas. The tear gas is very often laced with chemicals. When I was there in November, and in February and March, it was often mixed with caustic soda, sodium hydroxide. That is the stuff: if you get hit by the tear gas, your eyes burn like crazy. You start coughing. Sometimes you can't breathe anymore. And your skin gets burned. By the way, this picture was taken more or less right in front of my hostel. My hostel was located in the so-called zona cero, the ground zone, where nearly daily protests were happening and where protests are still happening. And here they are, the Carabineros de Chile, nicknamed pacos. They are hated, hated, hated by the people. I saw situations where, for example, not at a protest but in a normal street situation, a police car was driving by, and all of a sudden the whole street started screaming asesinos, murderers. So far, more than 40 people have died in these protests. More than 460 people have lost one or both eyes. They fire bullets directly at the faces of the demonstrators from a very short distance. Well known in Chile is the case of Gustavo Gatica, then a 21-year-old psychology student, whose eyes were shot out in November 2019. His bad luck was that he's tall, and so he was a target for the cops. And when one cop shot first one and then the other eye out, he was taking pictures. Amnesty International is right now running a campaign demanding justice for Gustavo Gatica. Another case, which happened recently, was a 16-year-old kid, a demonstrator; that happened in the beginning of October 2020. And this kid was violently pushed by a cop, head first, seven meters down from the Pío Nono Bridge into the Mapocho River. And he barely survived. Here you see a night scene with the green light of laser pointers. The demonstrators try to block the view by using the green laser pointers, trying to prevent the cops from firing bullets. At night, the attacks of the police were much more brutal and aggressive. And my guess is they thought that the night, the dark night, would protect them, because the demonstrators always make films. They film the police.
They film police attacks. They film whenever police are there; you can see people standing with their cell phones and filming. And so many of the abuses of the state and the police are well documented. I think it's interesting to remember that right now in France, President Macron wants to prohibit the filming of police actions during demonstrations. Street art in Santiago: this art refers to the violence of the cops. This is a collage. The pacos come from a photo. The woman is taken from, I think it was a French artist, I'm not sure. And this refers also to cases of sexual violence and rape at police stations. Here we see a neighbor lady who helps the protesters and provides them with bicarbonate spray for the throat and the eyes. And there was a little line, people were standing in line to get her help. This picture, street art, shows what the Chileans call the social explosion in one picture. Up in the left corner, you see the people who lost their eyes, the bleeding eyes. In the right corner, you see three blindfolded women dancing. These women are from the performance Un violador en tu camino, a rapist in your way. This is a performance by the group Las Tesis from Valparaíso. And they created this performance as an accusation against sexual violence and the whole patriarchal system. And within a few weeks, it became a viral hit worldwide. All over the world, women did and do this performance. For instance, in New York, they did it in front of the Harvey Weinstein trial. You see also in the picture the fear of the people, you see a paco firing, and you see a black dog in the middle. This black dog is called el Negro Matapacos, the black cop killer. And this was a real street dog who lived in Santiago and accompanied, I think it was in 2011, the protests of college kids who were fighting for a better education. And this stray dog was always fighting with them in the front line and attacking the police. And of course, the kids loved him for that and named him el Negro Matapacos, the black cop killer. This dog died in 2017, a natural death, and has become one of the great icons of the estallido social. You see pictures of him everywhere, and many Chileans identify with him. A reaction to the repression of the state was the creation of the Primera Línea, the front liners. And this is street art and shows a front liner couple. You see the man with a gas mask, and Primera Línea is written on his chest, and the woman is holding, like she is holding a drink, but it's a Molotov cocktail. And the Primera Línea, the front liners, fight directly against the police and stand in their way, and enable the other protesters to hold their demonstration. So you have a big peaceful demonstration in Plaza de la Dignidad, with live music, with theater, with carnival, and there's a Pikachu and there's a clown and there's a crocodile. And from time to time, fogs of tear gas are coming, and that's the Primera Línea fighting against the cops and making the demonstration possible. The other important volunteers are the Brigadas de la Salud, volunteers from the health care sector who provide first aid to the injured demonstrators. The women play a key role in the protest, and I personally think this would not be possible without the women. Here we see a young girl, and she's obviously very mad, and she is masked with a green bandana, and this green bandana is the symbol for the fight for the right to abortion.
Abortion is illegal in most Latin American countries, and the women are fighting for legal and safe abortion. On the poster is written: el estado opresor es un macho violador. The oppressive state is a macho rapist. And that is a line directly taken from Las Tesis' Un violador en tu camino. Here we see a young college girl in her school uniform, and she got badly hit by a bullet. Her blood is dripping down her leg. There is written: a seguir luchando, to continue the fight. When I was in Santiago in November, the Carabineros de Chile made an announcement that they would only use rubber bullets, which of course nobody believed. And, I think it was the Universidad de Santiago, they did research, and the result was that the bullets the Carabineros de Chile use are made 80% of metal and only 20% of rubber. This picture was taken on Plaza de la Dignidad on a Sunday afternoon in March, and it was a car rally that showed up demanding a new constitution. And there were lots of cars with flags and honking, and they were driving around the plaza, and the young lady is dressed in the Mapuche flag. And street art again: this is the classical motif of the Madonna, in a very unlikely way. This Madonna is encapuchada, she's masked, and by the way, I think it was in November when the government forbade that. And this Madonna also has a baby in her arm, but this baby is a paco baby. You see, it wears a little paco uniform. And instead of nursing it, she puts spikes in it. On her left hand is written ACAB, all cops are bastards. On her right arm, you can see el Negro Matapacos, the black cop killer. And yeah, personally, this is one of my favorite street art posters. This young lady was out in the streets in front of my hostel, fighting in the Primera Línea. I took the picture in the yard of the hostel. I had been out before, like she was; I was taking pictures outside when I was badly hit by tear gas and couldn't see anything anymore, was coughing like crazy, and stumbled to the safety of my hostel. My hostel was surrounded by a big metallic fence, and I knew that I would be safe there, and she was following me. And in the yard, we both got treated with bicarbonate spray. And after we both got better, we prepared to go out again. And that is when I took the picture. She wears on her right arm a green bandana, the symbol for the fight for the right to abortion. And she is encapuchada, and she holds a rock in her hand. And here he is, el Negro Matapacos, that's how he looked, the black cop killer. One of the biggest icons of the movement and a symbol for freedom and fighting. This is a movement without leadership. This movement organizes itself through social media. So when you go to a demonstration, nobody holds a speech, which I personally found very refreshing. A movement without leadership is also hard to deal with for the government, because you can corrupt leaders, you can bribe them, you can kill them. But what do you do with a movement which doesn't have any leaders, only icons like a dog? So here he is, el Negro Matapacos. This is my last picture for today. And here you see the stray dogs of Santiago, and in the middle, of course, el Negro Matapacos, the black cop killer. The writing says: yo apruebo, I consent, which refers to the referendum about the change to a new constitution. That referendum should have taken place in April 2020, but was postponed because of Corona. On October 25th, 2020, 78% of the Chilean people voted for a change of the constitution.
And nearly the same number voted that this change should be made by elected citizens and not by politicians. This was a great victory. They say: Chile was the cradle of neoliberalism, and it will also be its grave. Thank you very much for listening. Thank you to the CCC team for giving me the possibility to talk. Thank you to my friend Jali here, who helps me with the technical side. And thanks to my husband, who was deeply scared to let me go and to let me travel alone to Chile, but let me go. And a big, big, big gracias to Santiago, to Manuel, who walked with me to many demonstrations despite his bad knee, and to Claudia, who is giving everything to teach me the difficulties and problems of Spanish grammar. Thank you very much. I hope we have some time for some questions. Well, it seems that your talk stunned the audience. No questions. Which I understand. So far, just one question occurred, about the name of the dog, but you mentioned it afterwards. So el Negro Matapacos will be remembered. What I would find interesting is to hear a little bit about how you came to visit South America. You didn't go there as a riot tourist. No, no, no, no. No, I had been to Chile before several times, and I had taken Spanish courses there. I wanted to study Spanish. And I wanted to see South America. And if you travel in South America, you have to speak Spanish; without Spanish, I think it's not possible. And I was booked for November 4th, for four weeks in Santiago, in 2019, when the protests, what they call el estallido social, the social explosion, began. And I was not sure what to do, because I had read enough about the Chilean police to know that they are capable of everything. So my Chilean friends said: don't come, it's way too dangerous. But I was curious. I have to say, I was curious. I wanted to see what was going on. And it was also clear it would be a historical moment. What I didn't do, and what was probably smart, I didn't bring my good camera. I have a good camera, but I had seen enough films of how the police especially attacked people with cameras, because they thought they were journalists. And so I thought it was too dangerous to show up there with a camera. What was not clear to me was that it was really a war zone. When I arrived in Santiago, after an 18-hour-long flight, and the taxi drivers heard where I wanted to go, they said immediately: that's in the zona cero, we will not bring you. So it took me half an hour to persuade a taxi driver to bring me to my hostel. That's how it started. And in the hostel, I felt safe. But being outside was difficult. It was difficult. Several times, police came to me and said: stop taking pictures. And of course, in those moments, I stopped taking pictures. But yeah, it was always difficult. On the other side, the whole city of Santiago, all the walls were sprayed with graffiti, and all the walls were telling the story of Chile. And I have never seen such a thing in my life. So much art. It was a little bit like being in a museum, a museum of contemporary art. There was so much life and so many artists. So that was the other side. But of course, it was difficult. And the thing that annoys me a little bit is that there are very few, why is it going away?, there are very few good documentaries about Chile. There are very few good articles about Chile.
For those who have access to the Mediathek of the German-French TV channel Arte, I highly recommend you look in that Mediathek. The languages are German and French. And there you find some very good documentaries about Chile. So we have one question just coming in. Somebody would be interested in what the current situation or momentum of the demonstrations is. Well, the current situation is that one thing is that the movement tries to get their prisoners out. There are many people who were arrested, and it's not always very logical who got arrested or not. Yeah, I'm not sure what the right number is now, but many. I think we're talking about a thousand people, two thousand, I don't know, many. And that is one of the demands. The other demand is justice, because the government so far did not take any responsibility for the actions of its police. So these are the demands. In April there will be a new referendum, another referendum, and then the Chilean people have to vote. The citizens who will write the constitution are called constituents. So this whole process is going on. It is not over. Yeah, the referendum, where 78 percent voted for a new constitution, was a big victory. But this is not the end. There is a very small, rich elite, and I'm not sure whether they understand the situation in Chile. The ex-health minister in the Corona crisis, he said, I didn't have any idea how much poverty there is in Chile. And I think that was a very honest sentence. So the middle class, who is so much indebted, the middle class, who are slaves of the banks, I think that they are very much through with the system and they want a change. So it's widespread poverty. That's the driving element. And it's not only the so-called lumpen, the kids from the ghetto, the people nobody cares about; the middle class are the other people, who get poorer and poorer. And the interesting thing is that on the streets are the kids from the ghetto and the middle class together. Otherwise, you would not have nearly two million people protesting in the streets like in October 2019. How many Chileans are there? In Santiago, I think it's seven, eight million. And in the whole country, 16 million, maybe, 16. What I didn't say, but it's clear, the protests are also in other places. It's not only Santiago, but also in other cities. There's another question from the IRC. Do you feel that the protests could spread to neighboring countries? To other countries? Yeah, I think so. I think that the protest in Peru, where they recently got rid of their president, that that was influenced by Chile's protests. Yes, absolutely. Absolutely. Other Latin American countries. Yeah. Encouraging. Yeah, yeah, yeah, encouraging. And I think, and that's also, for me, was an interesting thing to see: in 1970, when Salvador Allende and his Unidad Popular, they wanted really another system, a new system, a socialist system, but a free socialist system, not a Soviet satellite system. And it seemed like the dictatorship of Pinochet had erased all this, all these ideas they had. And in this Estallido Social, you see that there's a cultural memory, that everything is still there. And the fact that songs like El Derecho de Vivir en Paz or the song El Pueblo Unido Jamás Será Vencido, the united people will never be defeated, which is a song from the times of the Unidad Popular, that they are so popular now, shows that the cultural memory of the ideas of Allende is still there.
And it is also no coincidence that the only politician I saw on banners was Salvador Allende. Yeah. And all the other politicians, with very few exceptions, do not go to the demonstrations, and they had better not, because the people are mad at them. They don't feel represented by them. Yeah, I think that's a very nice closing quote. Thank you very much for the talk. Thank you. Thank you very much. Thank you.
Since October 2019 the Chilean people have been fighting for social dignity and equality. During the dictatorship of Augusto Pinochet in the 80s nearly everything in Chile was privatized, including the health system, education and pensions. As a consequence the people suffer economically, get deeply indebted and become slaves of the banks. The Chilean people are fighting for and demanding a new constitution which is free of neoliberalism. As an eyewitness to the historic events in Santiago of the Chilean uprising beginning in 2019, I participated in many demonstrations, took hundreds of photographs, interviewed many demonstrators and got hit by a lot of teargas. I saw at first hand the violence of the Carabineros de Chile, the highly militarized police that President Piñera sent out to attack his own citizens. I saw seriously wounded people in the streets as well as the brave and heroic people who helped them. In my presentation I will show photos of the movement as well as the fantastic street art which tells the story of the Chilean revolution.
10.5446/51928 (DOI)
Welcome back here in the Hamburg Chaos Studio. We continue now with the important question of what bosses, the market, Corona and team members all have in common. Namely: all of them can disturb a team. Laura will speak to us about this in a moment, with support from ST. Laura has a background in human resources, currently works as an agile coach and in organizational development, is writing her dissertation at the moment, and still found the time to prepare this talk together with ST. Today she talks about what you can do to make teams measurably resilient, that is, resistant against disturbances from inside, from outside, and within the team itself. Hello everyone, I'm very happy that you're interested in organizational psychology and software teams. I'm Laura and I work as an organizational psychologist. About five years ago, ST and I started working together and I started working with tech teams. At that time someone once called me the HR lady, because they were not used to people without a technical background showing up among them and, in addition, wanting to help them with their collaboration. So that's why ST and I not only started working together, but also soon started doing research together. And then we submitted this talk because we realized that even more people are interested. So ST and I both work as agile coaches, but we are different personalities, we have different CVs, and a major difference is what we are interested in. ST is very interested in creating cool things, whatever that is. It might be in an art context, in a tech context, or in a business development context. And most of the time, I'm pretty much interested in how cool things get created: how does collaboration work, how do processes work, how does creativity happen, and how do people feel whenever they collaborate. The first question I have is: what makes you more upset? Are you more upset because of technical issues, or are you more upset because of people? The people you collaborate with: your colleagues, your customers, your teammates, your boss, whoever that might be. And before we dig a little bit deeper, the first question might be: what the hell is organizational psychology? We work with a lot of different companies, and therefore I want to give you an example of how I as an organizational psychologist get to know a new organization. In order to get you in the mood for it, I want to start with three little situations, and maybe you can relate to one or another. So the first thing that I mostly get to see when I visit a new customer is the kitchen, because I enter the office, somebody might offer me a coffee, and together we will go to the kitchen. And the first thing is: what do you see here? Do you see the mess, the garbage, the dishes standing all over the place? Or, like me, do you see something about the culture? Is there maybe a correlation between how the kitchen looks and how people care for their work, or how they document their code, or how they deploy things? Maybe there is a little bit of truth in it, that there is a relationship between how the offices look and what the work people do looks like. The second question that is very helpful to get to know a new organization is: how long does it take you to order something very simple, like coffee or a pencil?
And this question tells you a lot about the leadership, delegation levels, the tolerance of people and how well the processes work. And a third example is: how do people solve problems? Does everything have to be very shiny and very chic, or is it okay to be very creative in solving problems with very cheap solutions that really tackle the problem you want to address? So with this picture I learn a lot about process and product innovations. So, what again is organizational psychology? It is often defined as industrial and organizational psychology. Some people also call it occupational psychology or work and organizational psychology. And there is an association called the American Psychological Association. Something like the Chaos Computer Club for techies; it is the same kind of thing for psychologists. And they defined it as the scientific study of human behavior in organizations and in workplaces in general. So, we take a look at different things within that. It is not only individuals, it's groups, it's teams and it's the organization as a whole. And with all the knowledge that we have, we always want to solve problems that occur during collaboration. So now you might have the question: what problems do we address within that? On the left-hand side you see some examples that are maybe more easy to measure, like recruitment or selection processes, training and development, performance measurements or even reward systems. And on the other hand we have aspects that are often called the fuzzy ones, because it is about workplace motivation, quality of work, or the structures in which people collaborate and organizations develop, or even how customers behave. So maybe your first answer to the question was that you are more upset about people. And the question is: why is collaboration so exhausting, and why does it make us upset every now and then? So throughout the whole talk I want you to keep three things in mind, as a new organizational psychologist. We always work with very complex systems. It is hard to make things measurable. And we have different levels that we take a look at. And I want to give you a little bit more about those three aspects, so that you have some background information. So the first thing is about complexity. And I once read a quote which I observe everywhere: people are afraid of detail and afraid of complexity. That is a pattern that we observe a lot. And I want to give you an example where we observe it. So we as people often get into complex situations. And we try to oversimplify things in order to have the good feeling of having everything under control. So this is an example where we try to make visible how collaboration might work within our organization. And here the power of this structure is assumed to lie within the hierarchy. The next try is that the power is expected to lie within influence and knowledge. But in real life there is a difference between formal and informal ways of information flow. And in real life it is pretty much like that. So the value creation, the collaboration, the communication mainly works in those ways, which are not that pretty to visualize. But the first thing is always to accept that this is the case. In addition we have several players within our organization. And we have effects not only within our organization but also in the outside world. So the relation between a manager and an employee, or two employees, does not only involve those people, but always a little bit more.
Because it is really important what organizational culture we have around those relationships. Is it a healthy organization? Is it a very competitive market? Do we have a lot of satisfied or unsatisfied customers? What is the environment those people work in together? When I work with software teams we often deal with roles like agile coaches or scrum masters; low hierarchies and self-organization are words that always come up. So therefore we often have roles like an agile coach. And maybe they even have to establish the relation between leaders and employees, because we did not have anything like that before. And in addition to these extra roles that help us to collaborate, the organizational culture is important as well, because it forms the setting and therefore has an influence on the collaboration itself. So let's take a closer look at measurability. How do we measure and what do we measure? When we get to know new teams we always ask them a plain question: what do you measure? It's very unprimed, and the first thing that often comes up is: we measure our revenues, we know our costs. Maybe we have some measurements of customer satisfaction, like a net promoter score or something like that. But then we will continue asking: so what else? Is there any other dimension than the business dimension that you measure and keep an eye on? If we are lucky, then the teams also measure technical key performance indicators, or technical dashboards are in place that show what the deployment time is, what the deployment frequency is; maybe things like the mean time to recovery come to their mind. Often they even have a dashboard that shows how well they are set up within that. But we will still continue asking, because there is another important dimension missing and we do not have an answer to it right now. So we ask for team numbers. But when we ask that, we mostly get irritated looks, because besides of 'we know that we still have some bank holidays in front of us' or 'I know I will visit a conference', there are no numbers around this team dimension. And we always hear that it's about a gut feeling: I know everything is under control, or someone has a problem. That's nothing where measurements can help us. So we get the question: how on earth can I measure team dimensions, and how can I even improve them? Psychologists may forgive me, but I will try to give a really easy example of how we can measure team dimensions. The easy example is: how do you know that it is winter in Hamburg right now? When you take a look out of the window, it might be rainy, you might see a gray sky, you might see people with very warm clothes, the sun rises very late, sets very early, and there might be a lot of rain. So the fact that we have the season winter is only observable and measurable because we divide it into subdimensions that we can measure. We can measure the hours of sunlight, we can measure the amount of rain, we can see what we need to wear in order to stay warm. And in a psychological context we call something like the season winter a construct. That helps us to cluster the observations, but then we need to break it down into several dimensions that are operational, and then we can measure it. So it is the same thing for team dimensions. We can measure team dimensions when we make use of those so-called psychological or hypothetical constructs, and in behavioral science we make use of those in order to facilitate the understanding of human behavior.
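To make the step from construct to measurement concrete, here is a minimal sketch of how sub-dimension scores could be computed from questionnaire items. The dimensions, items, reverse-coding and the 7-point scale are all illustrative assumptions, not a validated instrument:

```python
# Minimal sketch: operationalising a construct into scored sub-dimensions.
# Items, dimensions and the 7-point Likert scale are invented placeholders.

def score_construct(responses, dimensions, reverse_items=()):
    """responses: {item_id: 1..7}; dimensions: {dim_name: [item_ids]}."""
    scores = {}
    for dim, items in dimensions.items():
        vals = []
        for item in items:
            v = responses[item]
            if item in reverse_items:  # reverse-coded items: 7 becomes 1, etc.
                v = 8 - v
            vals.append(v)
        scores[dim] = sum(vals) / len(vals)  # mean per sub-dimension
    return scores

# Hypothetical example: two sub-dimensions of a "winter-ness" construct.
dimensions = {"daylight": ["d1", "d2"], "weather": ["w1", "w2"]}
responses = {"d1": 2, "d2": 1, "w1": 6, "w2": 7}
print(score_construct(responses, dimensions))
# {'daylight': 1.5, 'weather': 6.5}
```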
So the constructs are so-called building blocks of scientific theories. But how do we get closer to the measurement is the next question. We can observe real-life problems that we have within our collaboration, or situations that occur every now and then. Like: we have problems when we deploy, or we have a lot of discussions about certain topics. People really need their vacation because they feel very stressed right before every holiday. Those can be observations where we know there might be a problem within the situation, which is definitely worth taking a closer look at and starting to measure. But how can we do that? If you observe things, take a look at the theory; there might be some helpful tools for your observations and problems, and with the help of theory you can identify how to operationalize your constructs. You can use self-assessments, where people simply answer a questionnaire. You can, for example, choose easy things to measure, like a mood board with a scale from 1 to 10, where people tell you what their impression is. Or you can measure by using data. For example, the amount of holidays that are taken very early in the year versus very late in the year. How many sick days do we have? How many conferences do we visit? What does the collaboration look like? Do we improve it every now and then with certain rituals, or do we not take any look at the collaboration itself? And you can also use experiments in a lab or in the field to make things measurable. Then, whatever state you have in the beginning, define actions for how you plan to tackle the problem that you identified earlier. And then re-measure, to see whether the actions that you took bring you closer to the goal or not. So you might be taking some actions, measure again, and see whether you've reached your goal and thereby solved the problem that you had within your collaboration. And then repeat. And it doesn't matter whether you start with an observation or with a theory that you want to test within your team. It's just about curiosity, and things that might help you to improve collaboration. The third thing to keep in mind is: we have different levels within our organizations. So organizations are complex systems with different levels, and every interaction influences other aspects. So whenever we start working for a company, start working for a team, there's an individual signing a contract and starting to work. Probably the person will work within a team that will interact with other teams or other employees within the company. No matter how low the hierarchies are, there might be a certain leadership person that is in charge of the teams, and also legally in charge. And in addition, we have the organizational level, also shaping and creating an atmosphere and having an influence on all the other players. Before we start with real-life examples, there's one aspect that is really important, because we have the feeling that it gets mixed up a lot: remember the quote about people being afraid of detail and complexity. I have the feeling that we can clear that up right now, and maybe you can make use of it in your working days. So remember what Henry says, and now we will dig a little bit deeper into complexity. Have you ever heard of the Cynefin framework? It is a helpful tool to identify which situation we are in and what challenges we are facing. The Cynefin framework was created by Dave Snowden, and the word Cynefin is Welsh for the word habitat.
Habitat already gives you an indication that we can conclude how to behave and how to deal with situations. Here you can see four areas, and we will briefly go through all of them. The first area is when we are in a clear or simple state. The relationship between a cause and an effect is obvious to everyone who is involved in it. So we simply sense, see what we have, categorize and directly respond. We can make use of best practices, because it's very easy to identify what helps us to solve this problem. The next area is the complicated one. That means the relation between a cause and an effect requires analysis first; maybe some expert knowledge is needed. But as soon as it is sensed and analyzed, we can easily respond to it, because we have good practices that are tested and that help us whenever the problem occurs. And 'whenever the problem occurs' is a pretty good hint, because when things are complicated, they repeat. It is like a habit that comes up every now and then. So we have challenges that appear maybe every week or every day or several times a day. And therefore, after the analysis, we really know what to do to tackle the problem. Now comes the interesting part, because it's about complexity. What does complex mean? The relationship between a cause and an effect can only be perceived and identified in retrospect, after the event took place. We probe, then sense and respond. And maybe we are lucky and the emergent practice that we use tackles the problem, or brings us closer, but the next time it might not appear in the same way that it did the first time. So complexity is about dealing with new things where we do not know what to do. We can only try things out and see in retrospect whether they helped or not. And the fourth situation is chaotic. There is no relationship between a cause and an effect at the system level. And therefore we need to develop totally new and novel practices. We act, sense and respond. But this is really the biggest challenge. And then we have the part in the middle. What does that mean? Whenever we do not know what causality exists, we talk about disorder. In this state people go back to their own comfort zone when making a decision. Maybe they are stuck and they do not know where they are, what to use and how to handle problems and challenges. But the most dangerous part is the catastrophic one. That is whenever we have chaotic situations and people try to become master of the situation by using simple best-practice solutions. That is really catastrophic, because it does not tackle the problem at all, and even worse, people do not even accept that the situation is chaotic. So we try to manage it by oversimplifying it. Remember that pattern from a little bit earlier; that is really risky. So the Cynefin framework gives us a hint: where are we, and what can we do in order to master the challenges that we are facing? And if people tell you the next time that things are complex, maybe you can make use of your new knowledge and try asking questions like: is it really complex, or are you in a state of disorder and do not know what to do? In our organizations we often face the fact that we only know in retrospect whether the action that we took helped us and was suitable for the cause or not. So keep in mind: now we have a shared understanding of what complexity is, and we can take a look at some examples.
I would like to make it more practical with you, and therefore we brought three sentences and three situations that we are confronted with a lot. We hear those sentences at least every other day, and maybe you can relate to at least one of the three. The first is: I'm afraid to make mistakes. The second is: we need to be more innovative. And the third is: I have a one-track mind, or ich bin im Tunnel, I am in the tunnel, which is very often said by developers. So, those three sentences are your new sensors and detectors, and now we can take a look at what they mean and what constructs we have for them in psychology. So, let's start with: I'm afraid to make mistakes. You can set that sentence in the context of teamwork and collaboration. Some years ago Google did a lot of studies on collaboration, and they found out that high-performing teams need so-called psychological safety. What does that mean? Psychological safety is the state where people feel comfortable and do not have any kind of fear in the situation and team they work in. So, this is a belief where one knows: I will not be punished, I will not be humiliated, whatever idea I come up with, whatever question I ask, whatever concern I raise or whatever mistake I make. For an individual it feels like taking an interpersonal risk, because I am revealing that I do not know something, or I have a crazy idea, and therefore I need to feel very safe when working in a team to raise those aspects. Amy Edmondson, an American psychologist, started working with medical teams a lot of years ago, and she measured their psychological safety with a self-assessment and observations. And she observed the following: her results showed that teams that feel psychologically safe seem to make more mistakes than teams with low psychological safety. But taking a second look at this effect tells you a lot about the impact it can have on organizations and collaboration as a whole. Because when you feel safe, you admit more readily that you have made a mistake, and therefore people can react and maybe help you earlier, before things get, for example, catastrophic. So, when teams feel safe, it is very beneficial for the organization, because whenever something occurs, people will raise their hand, will state their concern, and then we can solve problems together. A simple question is: how often in your team context did you not say what you should have said, because you didn't feel comfortable raising a question or stating a concern? So, whenever you observe that people do not discuss within a discussion, a meeting, a team event, but afterwards, or only with a certain person, that is a hint that psychological safety might not be given. So, when you observe it, you can maybe use the questionnaire, and some items are: when someone makes a mistake in this team, it is often held against him or her. Or: it is difficult to ask other members of this team for help. The scale and the items developed by Edmondson can be a helpful tool for you, and they are also set in relation to the learning behavior and the shared culture. Another thing that you can be aware of from now on is how fairly speaking shares are distributed within your team. Are there only one or two people talking a lot and others being very quiet? That can also be an indicator that there might be a certain situation that needs focus, in order to solve it and make people feel psychologically safe.
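One way to make the 'who talks how much' observation measurable is to count contributions per person in a meeting or chat log and look at the distribution. A minimal sketch with made-up data; the 50% threshold for flagging dominance is an arbitrary assumption:

```python
from collections import Counter

def speaking_shares(speaker_log):
    """speaker_log: list of speaker names, one entry per contribution."""
    counts = Counter(speaker_log)
    total = sum(counts.values())
    return {name: n / total for name, n in counts.items()}

# Hypothetical meeting log: one entry per utterance.
log = ["anna", "ben", "anna", "anna", "chris", "anna", "ben", "anna"]
shares = speaking_shares(log)
for name, share in sorted(shares.items(), key=lambda x: -x[1]):
    flag = "  <- dominates?" if share > 0.5 else ""
    print(f"{name}: {share:.0%}{flag}")
```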
So, that was the first sentence, one of my favorite psychological constructs, and I hope you can make use of it or maybe relate to the first situation. The second example is that we are often confronted with the statement: we need to be more innovative. Especially when we start working with new teams, we ask what help they need, we ask leadership people: what are your expectations, what do you need, where do you need help? And they say: we need to be more innovative, but mainly they mean: you as a team need to be more innovative. And no matter what, there is no doubt that organizations need to be innovative in order to be sustainably successful. But what does it mean to be innovative, what is the process behind it? This is an area of interest that is already pretty well known within psychology, but little known within the IT context. And maybe it helps you to have a more precise understanding. So, there are four phases of innovation, identified by West already in the 1990s. First, you have an idea. It doesn't have to be at work; it can be at home, or on the way to work, or having a coffee. You might have an idea. And whenever you feel comfortable with it, you might, as a second step, grab a coffee, talk to somebody in the kitchen, to your colleague, and tell the person about your idea. If you tell it to more and more people, or even raise it and ask for a budget, you might get some resources, like money or collaborators, in order to be able to create a product. And when you have created a product and customers buy it, then there might be a certain phase of standardization: making routines, making it better, customizing the product. And if you have made this experience, then you will probably raise the next idea whenever it comes to your mind. And this cycle, those are all aspects that are needed in order to be innovative. And we can measure that with an assessment, for example, which is called the Team Climate Inventory for Innovations. And what does it mean to be innovative? Remember: what does it mean that we have winter in Hamburg? We can break it down into four dimensions. The first is: do we have a shared vision, do we have a shared goal? So people know what is helpful to reach that vision, and therefore they can make a first check whenever they have an idea: does it help us, because it relates to our company vision? Second, do people feel safe enough, remember psychological safety, to raise even crazy ideas? Do people have people they trust, so they can share their ideas? The third thing is: do people in our company get support for innovations whenever they come up with ideas? Do they get time, do they get money, do they get collaborators in order to form a new product? And the fourth aspect is, whenever it's about standardization: do people have the knowledge of how to do that, how to improve, how to run customer surveys in order to improve the offer that we make? And then researchers realized that there's another aspect that is pretty important, which is the so-called social desirability. People have the tendency to behave in a way that makes others respect them, or support them, or think that they are cool persons. And therefore the aspect of social desirability is very important: do I state things or behave in a certain way only because I hope others will like me for it? And this is another aspect that is measured within this questionnaire.
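A traffic-light evaluation of the four dimensions, as described a bit later in the talk, could look roughly like this. The norm means and the tolerance band are invented placeholders; a real evaluation compares against the published norm tables:

```python
# Illustrative traffic-light ("Ampel") evaluation of the four TCI dimensions.
# Norm means and the +/-0.5 tolerance band are invented placeholders.

NORMS = {"vision": 3.8, "participative_safety": 3.9,
         "support_for_innovation": 3.5, "task_orientation": 3.7}

def traffic_light(team_scores, norms, band=0.5):
    result = {}
    for dim, score in team_scores.items():
        if score >= norms[dim]:
            result[dim] = "green"            # at or above the norm
        elif score >= norms[dim] - band:
            result[dim] = "yellow"           # slightly below: needs attention
        else:
            result[dim] = "red"              # clearly below: a real problem
    return result

team = {"vision": 2.9, "participative_safety": 4.2,
        "support_for_innovation": 3.6, "task_orientation": 3.4}
print(traffic_light(team, NORMS))
# {'vision': 'red', 'participative_safety': 'green',
#  'support_for_innovation': 'green', 'task_orientation': 'yellow'}
```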
So, whenever you hear the sentence 'we need to be more innovative', you can check those aspects: the vision, participative safety, support for innovations and task orientation. The third example is: I have a one-track mind. Being in the tunnel: what is that from a psychological perspective? We call it the flow experience. Flow experience is a construct that is also really old in psychology, and it helps to come up with new ideas and deliver them. And as soon as you experience flow, it is a state of mind where you feel optimally challenged, you are fully absorbed in the activity, and it feels very enjoyable and engrossing. In addition, research found out that it is beneficial for an organization, because things relate to one another; remember the different levels that we have. Here we also have the opportunity to measure it with a questionnaire. Two sample items are: I have no difficulty concentrating, or: I don't notice time passing. Those can be indicators that you experience flow. But there are always two sides of the coin, and the other side of experiencing flow is that people do not want to feel worried when they fail, and they do not want to hesitate to make mistakes. Remember psychological safety. So, this is another construct that we can take a look at. And when ST and I started measuring more and more things within software teams, we were confronted with the question: is there any relation between those things that we can observe? And therefore we started a first study this year and took a look at whether the perception of a team climate for innovations has an influence on the experience of flow or worry. Research several decades ago already dealt with innovations, and in a study from January 2020, several characteristics were related to the team climate for innovation, but not the flow experience that we hear about a lot when we work with software developers. And therefore we asked software developers to state their opinions on those two effects. And I want to briefly share with you the results of our first study. When it comes to the perceived team climate for innovations, we used a simple Ampel logic, a traffic-light logic, which says: when it's green, we have no problem; when it's yellow, oh, there might need to be some attention; and when we have a red issue, then there's really a problem. So, we want to share these results with you. And what we can see here is that, when we compare the results we obtained with norms and comparable groups, we see that for software developers there's often the situation that they do not have a clear vision; they do not have clear goals that they can contribute to. Whenever they have an idea, they feel supported, they get the resources, and they know the tasks they have to do in order to optimize the product. Although we had an anonymous survey, the social desirability tendencies were pretty high compared to others. So, this was the first hint that it is important to work on a shared, clear and inspiring vision whenever you want to be more innovative. We also took a look at the flow and worry experience, and what we found out is that software developers experience a lot of flow, which is very cool because, remember, it is really enjoyable for an individual and it is beneficial for an organization. But on the other hand, they were pretty worried about making mistakes, which also gives us an indicator, because they act very socially desirable.
And therefore, this is an important point, because we hear a lot of 'making mistakes is okay in our culture, and nobody will be punished'. But how can it be that those teams do not feel it, although leaders state it? That can be a point to work on in order to improve the collaboration. What can we see in our numbers? We found out that a highly perceived team climate for innovation is significantly related to the flow experience. So, there was a positive correlation between team climate for innovations and flow, but we could not find any significant correlation between team climate for innovations and worry. So, we set up another survey, and we will do more research on those effects. So, remember the three things to keep in mind. First, organizations are very complex. And now you know what questions you might have to ask in order to figure out whether something is really complex or just a state of disorder. Remember the example with the weather conditions, and that you can make whatever you are confronted with measurable. Remember: you can make it measurable if you ask people directly, if you observe things, if you make use of data, if you do experiments; however you want to tackle the problem that you identified. And the third thing is: we have different levels. So, to summarize: accept complexity successfully. The first thing is that you accept it; you can only inspect and adapt and try new things to be successful, and we will only know in retrospect. The second aspect is measurability. Make measurable whatever you can. Set your goals, share your vision, take actions, and then come closer to the goals and aims that you have. And the third thing is: identify the level where you observe the problem. Is it within individuals, teams, leadership or the organization as a whole? And then adjust your measurements and adjust the actions that you take. And I hope that from now on you won't be afraid of detail and complexity, also when it comes to teams, and not only when it comes to business and to customers. So I hope the priorities will shift and you realize that team numbers are as important as customer and business numbers. Remember all the options you have. You can observe things, you can realize what is going on and then identify what to do. Take a look in the theory books, get inspired about things that you can keep an eye on. Ask people about their perceptions; be aware that there is often a difference between perception and action. So therefore you might need another setup, like an experiment, or take a look at certain numbers to make things measurable. Identify and define actions, and see whether you come closer to the goal that you want to reach. We started with two studies this year. The first one tackled the individual and team level, and now we had a second study taking a look at the leadership level and organizational level. And next year we will conduct a third study as part of my PhD. And we are about to pre-register this study right now. So, 40 minutes is not much for organizational psychology. But I hope that you are now a bit more interested and no longer afraid when it comes to dealing with collaboration, emotions and people. And if you want to know a bit more about what we do, and maybe get some inspiration or input, you can find more information on our website. And I hope that from now on you are not only interested in cool things, but also in how cool things get created. Thank you.
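The headline result of the first study is a correlation across respondents. As a small illustration of how such a coefficient is computed (with invented numbers, not the N = 323 study data), a Pearson correlation can be calculated like this:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented per-respondent scale means, not the actual study data.
team_climate = [3.1, 3.8, 2.9, 4.2, 3.5, 4.0, 2.7, 3.9]
flow         = [3.0, 4.1, 2.8, 4.4, 3.2, 3.9, 2.9, 4.0]
print(f"r = {pearson_r(team_climate, flow):.2f}")  # strongly positive here
```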
And with me now are Laura and ST for the questions. The first question was whether such frameworks and measurements can only be used in software teams. Well, the early beginnings of using such frameworks or such measurements were never in the software or IT area. It's always in more established areas, especially in medical teams and learning environments, anywhere else. So you can use it wherever you want to use it. It doesn't only have to be at work. It can also be within your freelance activities, within sports teams. For example: whenever I run a survey, is a value high or is it low? You can then compare with other reference groups, with norm tables. And, for example, for the Team Climate Inventory for Innovations there are several norm tables where you can compare your values, the same as for flow and worry. So no, it's not only useful for software teams. Yeah, I saw in the last slides, just a short answer on that, that there is something planned for 21, so the upcoming year. Can this community take part in the studies? Yes, that would be really great. We plan the third study in the beginning of 2021, and the target group for our third survey is all roles that are important within software teams. So it's not only software developers, product owners, Scrum Masters, agile coaches, but leadership roles are also part of our interest. Yeah, so whenever you want to join our third study, you can pre-register on our website and we will let you know whenever the third study is live. Yeah, we have another question incoming. You say the flow number for software teams is very high. Do you know some examples of fields where it is as high as well, or where it is very low? Whether there is a job group that is comparably really low in experiencing flow, I cannot tell you right now. But it is comparably high for software developers; you can take a look in the statistics and norm tables for the flow short scale. And if you're interested in that, you can also go to the source and see what other target groups are within the studies. All right, there's another comment coming in, and I think that's the last for now. So it means that the framework can be used in open source development teams also? Sure, yeah, any kind of team. But always be aware that whenever you use self-assessment surveys, you also have a certain kind of bias, because people first need to realize that something is taking place, then they might reflect, and the third step is that they give an answer, which might be influenced by social desirability. So this is a fact that you have to keep in mind whenever you use self-assessment surveys. Another way of triangulating and taking another perspective is making use of certain kinds of experiments, or measuring things like the chat logs: how are your speaking shares distributed? Everything can be done anonymously, so no need to be afraid about the data that is being used. But yeah, that is a very interesting part, to have those perspectives: the one that is the self-assessment perspective, and the other one that is the measurement done within an experiment, or making use of data you already have. All right. Laura, ST, thank you very much for your talk. Thank you for the questions you answered. And I think we should do some self-assessment now. We wish you all lots of fun at rC3. Enjoy the 2D world everywhere, and have a good night. See you later.
We work and research in the field of work and organizational psychology in tech teams and companies. In this talk we will give insights into: How can software development teams be supported by organizational psychology? How can the collaboration in teams be changed measurably? Which factors have a measurable influence on the work of software development teams? Over the past decades, the interest in research and practical recommendations on innovation climate at team and organizational level has grown. Furthermore, the positive effects of flow experience have become increasingly present at work. The influence of leadership is commonly agreed, but little research has contributed to identifying the suitable leadership style for this target group supporting the presence of psychological safety and a climate for initiative. For the target group of members of product software development teams in Germany two studies were conducted in 2020. Study 1 focused on the team climate for innovation and the experience of flow and worry (N = 323). This study identified a significantly positive relation between the perceived team climate for innovations and the individual flow experience, whereas there is no significant relation between the perceived team climate for innovations and the individual experience of worry. Gender has no moderating effect. Regarding the four dimensions of the team climate for innovations, the expression of vision is relatively low for the target group of product software development teams in Germany compared to the norm tables. Participative safety is comparably high and task orientation and support for innovations are moderately distinctive. This means, all dimensions, except participative safety, need interventions in order to strengthen the team climate for innovations itself and thereby foster the flow experience. Since the extent of experiencing both flow and worry is relatively high, software product development itself and respectively the work environments seem to be stimulating, but also concerning for team members, which is why these aspects need action. Study 2 (N = 121) focuses on leadership styles and the relation to psychological safety and the climate for initiative. This talk will give a short insight into the used psychological constructs, followed by showing some research results and giving explanations on how to make use of the findings for working in software development teams.
10.5446/51937 (DOI)
Yeah, welcome everyone. I'd like to welcome Cathal McDaid. He is an expert on core mobility network signaling security. So he knows everything about the SS7 network interconnect standard, and his recent achievements include the discovery of the Simjacker vulnerability. And that was not the one where you just call your phone company and say, hey, I lost my SIM card, please send me a new one. It is an attack which runs in the background, like a silent SMS, taking over your SIM for a short period of time. He has worked for years, almost decades, in the field of telecommunications, messaging and security, and is also often a contributor or a guest in different worldwide media. So we will switch over to Ireland in a moment to hear how surveillance companies attack mobile networks, not only in Europe, but in many different parts of the world, for example in South America. They try to track the location of mobile phone users and also take measures to take over the phone for a certain amount of time. And within this talk we will analyze the data being used. Enjoy the talk, and your questions can be answered in the follow-up. Good afternoon, everybody. Welcome to the presentation: watching the watchers, how surveillance companies track you using mobile networks. My name is Cathal McDaid. I am CTO of AdaptiveMobile Security, and what we do is we help mobile operators around the world defend their telecom networks. Today I will be taking you through the world of mobile surveillance companies, seen from our experience in detecting and blocking them. I will be explaining what they do, how they do it, how they have changed over time, and what we can expect from them in the future, along with plenty of examples. And interestingly, this is actually quite a topical subject at the moment. Surveillance companies are often in the news, but these three headlines are all from this month, December, and they all cover roughly the same area: how surveillance companies are using mobile networks. And there have been many other headlines in the previous months and years. But before I jump into the details of these surveillance companies, it's worth remembering how we got here in the first place and why we are discussing this. Today, almost every network around the world uses the 2G or 3G network protocols. And the way these networks work together is using a protocol called Signalling System No. 7, or SS7. This is the backbone network which allows mobile operators to communicate within their network and between mobile networks, and it is what allows you to roam, when we used to roam, send text messages abroad, make phone calls, be connected, and so on. And as many of you are probably aware, there have been a lot of reports of security incidents with this over the last couple of years. And these all stem from one key assumption in the development of the SS7 network: simply that it assumes trust between every mobile phone operator around the world. The network was designed at a time when everybody assumed that only those who should have access would have access; simply a trust model. And as it turns out, this hasn't been the case, as there have been some connections which have abused this trust. Interestingly enough, the protocol which has replaced the 2G and 3G network in many places, the 4G network, also suffers from the same problem.
This protocol is called Diameter, and the same trust issue exists: it also assumes that everybody who is connected should have access and will not do anything malicious. So this is one small key takeaway. It is often said that the problem with the mobile phone networks is their age, because SS7 was developed in the 70s or 80s or 90s and wasn't designed with security in mind. Well, a protocol designed in the 2010s also has the same problem. In fact, it's even slightly worse. So the problem itself isn't so much the technology; it's the trust and the security assumptions built into the technology at the time. So keeping in mind the security implications, we can now look to see who is actually exploiting this trust model. Well, we see three main types of exploiters. One, there are the surveillance companies, who we are speaking about here. Second, governments. Here is a screenshot from a report from the Ukrainian regulator. This is from 2014, and it was one of the key events in pushing the development of signalling security. This is a report they issued concerning attacks or malicious activity which they observed coming into their networks from what they believed were Russian sources in 2014. And finally, of course, unsurprisingly, criminals. We've also seen criminals exploiting these networks. There is some overlap between surveillance companies and governments, as you might expect. Governments are often the customers; they want to buy this equipment from surveillance companies, but sometimes governments may try to build this technology themselves rather than rely on surveillance companies. And when they do, they often use some of the same sources and entry points as surveillance companies. We also see a small overlap between criminal activity and surveillance companies. Again, sometimes there is an overlap in the sources and in how they gain access to these networks. One important thing to keep in mind about surveillance companies is that they have very large resources; they get paid a lot for what they do. And these large resources translate into complex attacks and quite sophisticated technologies. And we'll see this as I go into more detail about how attacks are executed. So attacks and how they are executed is a very interesting point, because it's not always apparent exactly what is an attack over these signalling networks. The first thing to keep in mind, though, is that the industry is very different from 2014. Since 2014, we in the industry have been recommending ways for mobile operators to protect subscribers and their networks. And the key output of this is a series of recommendations, standards or documents, if you will. For the 2G and 3G networks, which use SS7, the key document is a document called FS.11. And for the 4G network, which uses a protocol called Diameter, the key output is a document called FS.19. And so what the operators around the world do is they take this information and then work with mobile security companies like ourselves, or other vendors, to put in place protection and firewalls and defenses based on these recommendations. One particular thing to keep in mind, though, is that this is just a starting block. When they apply these recommendations, they find that the vast, vast majority of traffic, SS7 traffic in this case, is completely normal. But there's a very small percentage, in this case 0.04%, which we see, which is irregular or suspicious. A very important thing to keep in mind, though, is that irregular or suspicious does not necessarily equal malicious.
When you actually look at this 0.04% of traffic, the vast, vast majority of it is just noise. It's misconfigured nodes around the world, local specific configurations, and so on. The vast majority is not actually malicious. When you investigate this in detail, as we believe you have to do, you find that only a very small percentage of this 0.04%, namely 1.37%, is actually malicious; that works out at roughly five packets in every million. And this is an important point. Not everything which an operator may block is actually malicious. A lot of it is just noise which they are blocking primarily to be safe and to be certain. And it can take a lot of experience, and a lot of analysis, to determine what is malicious versus what is simply irregular. And it can be quite easy to make mistakes. If you sometimes read headlines of huge attacks using the SS7 network, in many cases what has happened is that the person analyzing regarded all of this type of traffic as malicious, but that isn't the case. It's simply irregular. In this presentation, we focus primarily, and in fact exclusively, on what we regard as malicious types of traffic. So looking at the traffic itself, let's look to see who generates it. So one question is: what do mobile surveillance companies do? In our experience it primarily breaks into two main areas. When it comes to SS7, mobile surveillance companies spend most of their activity, 60% of their activity, harvesting information. And then roughly about half that again, about 30%, is spent doing the actual tracking. And I'll show you how that ratio often shows up in real-life attacks soon. They also spend a certain amount of time doing testing, as well as a small percentage of time actually doing interception of calls and text messages. You may expect that to be larger, but that is not the case. The vast majority of the time, surveillance companies are doing tracking, or they're doing information harvesting. And the reason for the information harvesting is simply to help their location tracking. This is, as I mentioned, SS7 activity, which is on the 3G slash 2G network. For the 4G network, they use the protocol called Diameter. I haven't shown that in these stats here. Their malicious activity over Diameter has been quite small in the past, but we have seen a large increase in it recently. And for one particular surveillance company, we also see SMS activity, and I'll go into more detail about that soon. So how is location tracking done via SS7? Well, first of all, if you want more public background information, I really recommend you take a look at two excellent presentations from an earlier edition of this Chaos Communication Congress, 31C3. Those are from Karsten Nohl and Tobias Engel, and they gave a very good overview of how these attacks are executed. But from a high level, there are two different ways of doing this: a direct method, where an attacker will query a node called the HLR, sending in a phone number, the MSISDN, and getting back a cell ID; or an indirect method, where the attacker will first use the phone number to get some background information, in this case the IMSI and the serving MSC, and then use this information to query a node deeper in the network directly, to get back the same information, the cell ID. The main part of method one and the second part of method two are the location tracking part.
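To make the two methods concrete, here is a toy simulation of the message flow. The function names mirror the MAP operations just described, but this is a simplified model over a stand-in dictionary, not a working SS7 stack, and all identifiers are invented:

```python
# Toy simulation of the two SS7 location-tracking methods from the talk.
# Not a real SS7 stack: the "network" dictionary stands in for an
# operator's HLR and MSC/VLR records. All values are invented.

NETWORK = {
    "+491701234567": {"imsi": "262011234567890",
                      "msc": "gt-49-msc-07",
                      "cell_id": "262-01-4711-0815"},
}

def any_time_interrogation(msisdn):
    """Method 1 (direct): ATI to the HLR, phone number in, cell ID out."""
    return NETWORK[msisdn]["cell_id"]

def send_routing_info_sm(msisdn):
    """Information harvesting: SRI-SM returns IMSI and serving MSC."""
    sub = NETWORK[msisdn]
    return sub["imsi"], sub["msc"]

def provide_subscriber_info(imsi, msc):
    """Method 2 (indirect): PSI straight to the MSC, needs IMSI + MSC first."""
    for sub in NETWORK.values():
        if sub["imsi"] == imsi and sub["msc"] == msc:
            return sub["cell_id"]
    return None

# Method 1: one packet, but easier for the operator to block at the edge.
print(any_time_interrogation("+491701234567"))

# Method 2: harvest first, then query deeper in the network.
imsi, msc = send_routing_info_sm("+491701234567")
print(provide_subscriber_info(imsi, msc))
```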
The part beforehand in method two is called information harvesting. Now you may ask: why does an attacker do this at all? Why should they use method two when method one is more direct? Well, this is because things change. Mobile operators are putting in defenses, and now it's a lot harder to do method one. Surveillance companies have essentially a toolbox of commands that they can use. There are three main commands in the SS7 network: ATI, PSI, and PSL, which stand for the information in the table here. And from their perspective, each of these commands has pros and cons. Their primary decision-making point is often down to what will work in the operator that they are targeting, and this is mainly based on what defenses the operator has. So to show this diagrammatically, we can use this graph here. I've plotted two main pieces of information on the axes. On the bottom axis is the possibility of the attack being blocked, and on the left axis, the vertical axis, the amount of information that an attacker needs to have to be successful. And you can see these three commands spread out like this. It shows the amount of information which an attacker might need on the left and, like I said, on the bottom, the possibility of the attack being blocked. Where an attacker really wants to be is in the bottom-left segment, because there it's more likely that their attack will be successful, or less likely to be blocked, and they need less information. So we can see that for the ATI, the possibility of it being blocked is quite high, but the amount of information that the attacker needs is quite low; it just needs a phone number. Whereas for the PSI in the top left, the amount of information the attacker needs is high, it needs the phone number and more details, but the possibility of the attack being blocked is lower. So you can see there's a distribution, and there are choices to be made by an attacker. I'll show those choices and how a real-life attack actually occurs. Here is a real-life attack from March 2018. In this particular case, there are several stages to the attack. First of all, there's an information harvesting part. At this point in time, we saw two attacks, two packets, come in from two sources in the UK Channel Islands. These are two operators, Sure Guernsey and Jersey Airtel. These used a command called SRI-SM. This is a standard information-harvesting type method. Then there was more information harvesting, using two other types of packets. These, confusingly, look very similar, but they do slightly different things, again from the same sources. And then we saw a third round of information harvesting: again two SRI-SMs from the United Kingdom, but also one packet from Cameroon. And then finally, at this point, we saw the actual location tracking attack. Here we see four ATIs: one from Jersey Airtel, and then others from Cameroon and Israel. It's important to note, in this particular case, that all these attacks were actually blocked by the operator, so no information was retrieved. And the ATIs at the end were more an element of desperation from the attacker. Also, this is all within a five-minute period, so you can see the sequence of attacks is relatively quick across all five stages. In this particular case, the attacker was in quite a hurry. Why were they in a hurry, and who was the actual target?
Well, this is what actually occurred. We subsequently learned that the targeted mobile number was associated with this person, Hervé Jaubert, a French former naval officer and marine engineer. And the aim of the attack, we believe, was to see if the number existed and, if so, its location. This is a video of the Nostromo, the boat which this person was believed to be on at the time. And there's quite a bit of discussion about the geopolitical events around this case in this article. For more details, I encourage you to go to the link to get the complete story of what actually occurred around this time. So that is the SS7 network. But now let's look to see how it happens in other networks, particularly the 4G network, which uses the Diameter protocol. This is very, very similar. Again, there can be a direct method, in which case the attacker can use a command called UDR and then retrieve the cell ID from the HSS, or an indirect method. And in this case, there's nothing to stop the attacker using an earlier packet from an earlier protocol, in this case SS7, to get the information, because this is simply the information harvesting part of the phase. So assuming that they do this, they harvest the information using this command, they then use this information in an IDR command, and then retrieve the cell ID from the target network. Again, two methods; the location tracking is the key part that retrieves the information, but information harvesting is really the prerequisite that needs to happen once the target's network starts putting in protection. So again, there's a toolbox of commands that the attacker can use. And as you can guess, each one of these commands also has pros and cons for whether it can be used successfully or not. So to show that again visually, with the same graph: I recreate the three commands which you saw earlier for SS7. If I plot the three Diameter commands, we can actually see that they occupy somewhat the same or similar positions as the SS7 commands. The two in the bottom right, PLR and UDR, are the types of attack an attacker would want to use in an ideal world, because they require less information. But these are much more likely to be successfully blocked by an operator from the start. So in many cases, what the attacker ends up having to use is the command called IDR, which is in the top left. They need a lot more information for this to be successful, but it's harder for an operator to actually block it.
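What 'harder to block' means in practice is that defending against IDR calls for a plausibility check in the spirit of FS.19 rather than simple message filtering: does the origin of the request have any legitimate relationship with the subscriber? A heavily simplified sketch; the field names, the partner list and the roaming lookup are illustrative assumptions, not a product implementation:

```python
# Heavily simplified sketch of an FS.19-style plausibility check for
# incoming Diameter IDR messages. Field names, the partner list and the
# roaming lookup are invented placeholders.

ROAMING_PARTNERS = {"jersey-airtel.example", "operator-x.example"}

def subscriber_is_roaming_in(imsi, origin_realm):
    # Placeholder: a real firewall would check the HSS for the
    # subscriber's currently registered serving network.
    return False

def allow_idr(msg):
    origin = msg["origin_realm"]
    if origin not in ROAMING_PARTNERS:
        return False                     # no agreement at all: drop outright
    if msg.get("requests_location") and not subscriber_is_roaming_in(
            msg["imsi"], origin):
        return False                     # partner, but subscriber isn't there
    return True

idr = {"origin_realm": "jersey-airtel.example",
       "imsi": "505991234567890",        # home subscriber, not roaming
       "requests_location": True}
print(allow_idr(idr))                    # False: no reason for this request
```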
So one important thing to know is that I've shown you 3G and I've shown you 4G, but these surveillance companies don't necessarily think of the world that way. They see mobile technology as a tool, not as a path. To recap: a surveillance company wants to track its targets, obviously. What we've seen over time is that they execute SS7 attacks using the 3G protocols. What happens then is that the mobile operator starts putting protection in place and builds firewalls to prevent those types of attacks. Over time the surveillance company might switch to 4G and use the Diameter protocol, and again, the mobile operator will eventually put defenses in place to block those attacks. So what you might expect in the future is that the surveillance company uses variants of these attacks, different ways of doing things, or eventually moves to the 5G protocols, and again our mobile operator puts firewalls in place, and the firewall protects us. Now, it would be brilliant if the world worked like this, but surveillance companies don't think in a linear path. From their perspective, all they care about is the target. They don't care what technology they use, and they are not beholden to development plans and technology schedules. They just want information on the target. So if they can get sources for their attacks within the network itself, that becomes a very valuable thing for them to aim for and a very valuable tool to use. And this is what we've seen with the next type of attack, one we call Simjacker. This is why it was so valuable: it essentially allowed them to bypass the plans and the thought processes of the industry in defending against surveillance companies using mobile technologies.

To step back a moment: what exactly is Simjacker? It's a vulnerability which we reported last year, in 2019, and it lies in a SIM card library called the S@T Browser, pronounced "sat browser". The problem with the S@T Browser is that it did not validate or authenticate the source of any SMS it received. This vulnerability could therefore be exploited by text messages: once a text message was sent with S@T Browser commands in it, those commands were allowed access to a subset of what are called SIM toolkit commands on the mobile device. We issued a very detailed report, over 40 pages of technical detail, which is free online from www.simjacker.com and which I recommend you read. When we analyzed this, we found that the vulnerability was present on several hundred million SIM cards around the world, and we could see, and I'll show you examples, that it was actively exploited in these three countries in Latin America. We shared it through a CVD, a coordinated vulnerability disclosure, within the mobile industry in mid-2019. We reported some information in September 2019 before giving the full technical details in October 2019; that staggered approach was to give mobile operators time to put defenses in place and to see whether they were actually affected or not. And the key thing about Simjacker is, first, that it was a huge increase in complexity: it was the first recorded spyware actually sent within an SMS.
There had been some rumors and reports, from NSA leaks, of this type of capability, but it had never actually been seen before in real life. As well as this, it was also a huge increase in capability: it allowed one surveillance company in particular to do a lot more than they had been doing in the past, potentially to achieve a lot more results and to offer new services to their customers.

So here is the flow of how this actually works. To do a Simjacker attack, a surveillance company doesn't need SS7 access. They don't need to buy expensive equipment, they don't need to buy links. All they need is a mobile device. They take this mobile device and simply send a text message with a series of commands in it to their target. This text message is forwarded on to the target and received by the device. When the device receives that text message, it hands it to the SIM card within the device, and then the SIM card takes over; this is where the term Simjacker came from. The SIM card then instructs the device to provide information, in this particular case the cell ID, and this information is sent back to the SIM card. The SIM card also requests a bunch of other information, such as the type of device and more attributes. Once all this information has been received, the SIM card instructs the device to send out a text message, and the device sends this text message directly back to the surveillance company and their mobile handset. So SMS is involved underneath, but there is no need for expensive access, and no need to get around SS7 firewalls or Diameter firewalls: they are simply using text messages to initiate and execute the attacks. This is the location tracking part of the command, but in fact the whole sequence from start to finish is one location tracking phase. Again, if you want more information, I really encourage you to check out the paper on simjacker.com, because it goes into far more detail. What I've shown here is the method of sending from a handset and extracting to a handset; the surveillance company used many different methods. Sometimes they extracted to an SS7 node, sometimes they sent from other types of links, and they used multiple methods to try to avoid defenses. But this shows, in its most basic form, how they actually executed this attack.

Again, to the toolbox of commands the attacker used. The great thing from the attacker's perspective, the pros: they don't require any SS7 access. All they require is a phone number, the phone number of the target. The con, also from their perspective, is that they needed the destination, the victim's handset, to have a SIM card with this browser on it. Your SIM cards, the vast majority of European SIM cards, don't have this library on them. It was a certain percentage when we analyzed it, several hundred million, and that's a conservative number, but not every SIM card around the world has this vulnerable library on it. So that was a particular con they had. And also sometimes, not often, but sometimes, some operators had put security in place around it: the default deployment of the library was vulnerable, but some operators had made changes to make it non-vulnerable.
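Before moving on, here is a toy simulation, in Python, of the flow just described. The data structures and the "SIM" behaviour are purely illustrative (real S@T Browser payloads are binary TLVs inside special SMS), but the core bug is faithfully mirrored: the handler never checks who sent the message.

# Toy model of the Simjacker flow described above. All names are invented.
def vulnerable_sim_handle_sms(sms, device):
    # The flaw: an S@T Browser payload is executed without ever
    # authenticating sms["sender"].
    if sms["type"] != "sat_browser":
        return None
    harvested = {cmd: device[cmd] for cmd in sms["commands"]}  # e.g. cell ID, IMEI
    # The SIM then instructs the handset to SMS the blob back out, silently;
    # the victim never sees the incoming or the outgoing message.
    return {"to": sms["reply_to"], "body": harvested}

device = {"cell_id": "MCC/MNC/LAC/CID (invented)", "imei": "350000000000000"}
attack_sms = {
    "type": "sat_browser",
    "sender": "+000000000000",    # never checked: that is the vulnerability
    "reply_to": "+000000000001",  # attacker handset, or an SS7 global title in some variants
    "commands": ["cell_id", "imei"],
}
print(vulnerable_sim_handle_sms(attack_sms, device))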
So this is a con, obviously, from the attacker's perspective: not every mobile device around the world had this library present. If I take my grid again and the distribution of these attacks, and say that SS7 is distributed this way and Diameter is distributed this way, then one way of visualizing a Simjacker attack is very much in the bottom left quadrant, because the possibility of it being blocked was actually quite low: it requires specific logic and algorithms which most operators may not have had in place, and the amount of information required was very, very low. Again, all it required was a phone number. That is, of course, on the presumption that the targeted number had a SIM card with the S@T Browser present on it; if there was no S@T Browser present on the target SIM, the attack wouldn't work.

So now I'm going to go into details about a particular attack. I just want to note, before going into this example, that the vast majority of Simjacker attacks we saw were sent from one handset to another handset: an attacker would target you, and once your phone received the message, it would send the reply back to a handset, normally within that network. But sometimes, as we saw, they would try to exfiltrate via an SS7 address, and that is the case I'm showing here. Essentially, this is an attack we saw a few months ago. The destination is a Mexican phone number, and the message was sent from a Mexican mobile as well, so this is a mobile-to-mobile, handset-to-handset type of attack. Here we can see the actual payload. The STK protocol identifier indicates this is an S@T Browser payload, and within it a series of information is being requested. The one I've highlighted here is requesting location information, but it is also requesting the IMEI, which identifies the exact type of handset. Once all this information has been received, there is a concatenate command, which you see there: that puts everything into one blob, and this blob is then sent outwards using the send short message command. So this command instructs the handset to send out another text message with all the information that has been gathered, and this information will be sent to an SS7 address, which again is registered to Sure Guernsey. For those keeping track, that is actually the same SS7 address, what we call a global title, as in the attack I showed in the SS7 example. This type of information is quite useful to us sometimes for doing correlation and association of different types of attacks.

You can probably see this looks quite complex, and it is certainly a more sophisticated type of attack than what we see over SS7 or even Diameter; a lot of work and effort has gone into putting these attacks together. What it does is really open up the avenues for the attacker, because, like I said, they don't need SS7 access for this. In my particular case, once we had reverse engineered these attacks, it was quite a sobering thing to realize that I myself had the ability to track potentially several hundred million people just by using text messages. And this was before these mobile operators put detection and blocking of these attacks in place. So certainly a very, very powerful technique.
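For readers who want to map this payload to the SIM toolkit layer: as far as I can tell, the steps in the capture correspond to the sequence sketched below. The PROVIDE LOCAL INFORMATION and SEND SHORT MESSAGE names are standard proactive commands from ETSI TS 102 223; the concatenate step is an S@T Browser operation rather than a TS 102 223 command, and pairing all of this to the capture is my interpretation of the talk, not a byte-level decode.

# Sketch of the command sequence inside the observed payload (interpretive).
payload_sequence = [
    ("PROVIDE LOCAL INFORMATION", "location: MCC/MNC/LAC/cell ID of the handset"),
    ("PROVIDE LOCAL INFORMATION", "IMEI: the exact handset type"),
    ("CONCATENATE (S@T operation)", "pack all results into one blob"),
    ("SEND SHORT MESSAGE", "exfiltrate the blob to the attacker's global title"),
]
for command, purpose in payload_sequence:
    print(f"{command:>30}: {purpose}")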
So, stepping back a moment and going back to generalities: I've shown you the Simjacker type of attack, but how does Simjacker compare to SS7? This is data from the second half of 2019 and the start of 2020, taken from between eight and ten specific mobile operators, and these, by the way, are all attacks which were blocked. We can see that the vast majority of attacks we've seen use SS7, roughly two thirds, and then one third of the surveillance-type tracking attacks use Simjacker techniques. Now, like I said, according to our intelligence only one surveillance company uses Simjacker, and the reason the Simjacker volumes are so high is that we have one or two specific operators where the volumes of Simjacker attacks are huge, and these skew our overall statistics somewhat.

To show you this, here are the stats from one particular operator, call it Operator A. In that case the vast, vast majority of location tracking attacks we see were executed using Simjacker, that is, SMS, and only a small percentage, 15%, were executed using SS7. And believe it or not, this is a much smaller number of Simjacker attacks than in the past. Prior to the public announcement, and prior to us putting active detection and blocking of these attacks in place, the ratio was many, many times higher: the volumes of location tracking attacks using Simjacker were absolutely enormous. The Diameter numbers might be surprising: we saw only a very small percentage of Diameter attacks up until the last six months. But, somewhat surprisingly to me, and possibly a shape of things to come, in the last six months there has been a large escalation of Diameter attacks. I haven't shown the Diameter attacks in these stats, but we can show them in future reports.

So, coming back to the Simjacker versus SS7 distribution, the working theory we have behind this obvious mismatch between certain operators is that there are different types of end users for these surveillance companies. This is probably best shown with the following graph. We are all, unfortunately due to COVID, aware of rates per 100,000. To build on this, what I've done here is show a distribution of location tracking attempts per 100,000 subscribers in one year. This is an easy way to show the rate of tracking per operator, because some operators are much bigger than others, and if we showed absolute volumes the numbers would be very skewed. You can see it is actually quite standard: I've shown nine operators here, and the SS7 location tracking activity normally ranges between perhaps 15 and 50 location tracking attempts per 100,000 subscribers. So that seems quite standard and quite evenly distributed. But the interesting thing is that if I start to add in Simjacker activity, we see that in one particular operator the amount of observed Simjacker location tracking is huge, bringing it up to roughly 400. And this is with us doing detection and blocking: when we go and detect and block these attacks, we actually disturb the system.
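A quick aside to make that normalisation concrete. The arithmetic below uses round, invented subscriber figures chosen only to reproduce the ranges mentioned in the talk; it is not actual operator data.

# Rate used in the graph above: tracking attempts per 100,000 subscribers
# per year, so operators of very different sizes stay comparable.
def tracking_rate(attempts_per_year, subscribers):
    return attempts_per_year / subscribers * 100_000

# A hypothetical operator with 5 million subscribers:
print(tracking_rate(2_500, 5_000_000))   # ->   50.0, top of the typical SS7 range
print(tracking_rate(20_000, 5_000_000))  # ->  400.0, the observed Simjacker outlier
print(tracking_rate(60_000, 5_000_000))  # -> 1200.0, the pre-blocking Simjacker estimate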
It's not quite Schrödinger's cat, but basically our act of observing and blocking has caused the system to go out of equilibrium. From our estimates and analysis, we believe that prior to this detection and blocking, the extent of the Simjacker activity was up to over 1,200 location tracking attempts per 100,000 subscribers. We have less rigorous evidence, but something we are trying to firm up: we believe that in another operator it was even higher, up to around 2,000 or possibly above 2,000 location tracking attempts per 100,000 subscribers. So the actual usage was much, much higher in these operators. What this gives us is a few conclusions. We could say, at least from what we've observed, that SS7, the 2G and 3G protocol, is not normally used for bulk subscriber tracking, at least by surveillance companies. But Simjacker certainly was, or is: it was a technology developed and used for bulk tracking of subscribers. And this was one key reason why we thought Simjacker was so important: it really introduced a new way of operating for surveillance companies, and new use cases for them to potentially offer.

We also see other trends over time, which is probably interesting for some of you. These are the trends of SS7 location tracking commands over time. We can see ATI in 2016, which, if you remember, is the blue color: that's the one we said is probably the easiest for the surveillance companies, but also the easiest for mobile operators to block. Its volumes have decreased a lot since 2016. Another command, PSL, which sits sort of midway in the grid, grew in popularity between 2016 and 2017, and has now really dropped off around 2019. PSI, on the other hand, has increased and been quite steady. That's the one that is hardest for the attacker to use; they would certainly prefer not to use it, because it doesn't always work, but it is the only one they may have any success with anymore, or feel they have any success with. Like I said, these are all blocked commands. One interesting point: you may see two new colors here, variants of ATI and of PSI (provideSubscriberInfo). You may say, these are the exact same commands. Well, what is actually happening is that the attackers have made variants: they have tried to disguise these commands, to give them a new lease of life, using a potential vulnerability called global opcodes. If you want more information on this, I recommend you check out the presentation from Positive Technologies at Hack in the Box in May 2019. Essentially, the attackers try to bypass the protection in place, and if this works, it gives a new lease of life to the ATI command, because then they can come back to using their favorite ATI command and this time may be able to bypass any defenses that are in place.

One question we often get asked is how these surveillance companies gain access to the SS7 network. Sometimes that is closely followed by: and how do I gain access to the SS7 network? Which makes me wonder about the person asking me the question. There are multiple methods, and a lot of this comes down to intelligence and research, but primarily there are three main methods which are the most common.
And one, as you can guess, is that they pay for the link. This can be quite nebulous and sometimes very hard to track down, but often these surveillance companies will set up a front company, which then negotiates access with other companies, which may in turn resell access to mobile operators. So there can be multiple layers of who is selling access to whom. This is still not guaranteed to work for them, but it often works best in jurisdictions or areas with poor regulation or oversight, or with companies that may not investigate too thoroughly what these companies are doing once they get access. In many cases they may get access for one legitimate service, and then after a month or two start to switch to other uses which are malicious.

The second method they may use to gain access is government power. Governments, like I said, are the customers of these surveillance solutions, so they may mandate that the system be installed in a captive operator, or else add it directly onto a link, bypassing the operator completely. In that case the operator may have very little say in the matter: they have been told to install the system, or, as in a lot of countries around the world where operators have direct connections to the backbone network, the system can be added directly onto a link. This is less common than paying for a link, but it does happen.

And finally, something that is quite rare nowadays at least: old legacy connections, defunct companies whose access was never completely removed. This is much rarer, because on the SS7 network every packet has to be paid for by somebody, so it is very unusual to have access to a network with nobody charging you for it. It is also less of an issue in Diameter than in SS7, but it was present in the past and it did happen. There are also less common ways in which operators may give surveillance companies paid access, but I am not going to go into those in this presentation.

One particular thing that is quite interesting: as you can guess, the pricing of this access is very opaque, but from our analysis we can see that it normally costs between 2 and 10 cents per message, the units that are sent. And the more connections, the more access a surveillance company has over the SS7 or Diameter network, the more valuable it is, because it means that if one source gets blocked or detected, they still have backups, different ways to send attacks. So it is very much in these surveillance companies' interest to have as much access as possible, as distributed as possible. This also leads to some rather bizarre business cases when you come across them. This is a chart from an SS7 tracking company, or a purported SS7 tracking company, with the prices they advertised on the web. And this is very much the opposite of what you would expect if you were an economics student, because there are no economies of scale here: they were actually charging more, not less, the more you tried to track, which is quite unusual. But it makes sense if you consider what they are doing. The more you try to track, the more you draw attention to yourself, and therefore the more likely it is that the link will be disconnected and they would lose their entire business.
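As a back-of-the-envelope illustration of those wholesale prices: the packet count below is the rough total of the five-minute March 2018 sequence from earlier in the talk, and the rest is simple arithmetic. The retail prices these companies charge their own customers are, of course, far higher.

# Rough wholesale cost of the eleven-packet tracking sequence shown earlier,
# at the observed price range of 2 to 10 (US) cents per message.
packets = 11  # the March 2018 sequence: seven harvesting packets plus four ATIs
for cents_per_msg in (2, 10):
    print(f"at {cents_per_msg} cents/msg: ${packets * cents_per_msg / 100:.2f}")
# -> at 2 cents/msg: $0.22
# -> at 10 cents/msg: $1.10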
So from their perspective, it is worth charging you more, because they are taking a higher and higher risk, rather than charging you less even though you are buying more. It is an inversion of what you would expect, and, like I said, not really an economies-of-scale approach.

So that is surveillance companies today. But we also want to talk about 5G and mobile surveillance companies. Now, I can guarantee you that 5G networks will be targeted for use by these mobile surveillance companies. And, as I said at the start, newer does not always equal better. The 5G network does solve many security problems in mobile networks, especially on the radio side, and makes improvements on some of the core network side. But it also introduces new risks and new potential problems. For a start, it is more complex, and anything more complex inevitably may have more potential vulnerabilities. To show you how much more complex, here is a graph of the 4G network versus the 5G network when it comes to protocol complexity. You can see it is many multiples more complex, both in the number of messages that can be sent, on the bottom axis, and in how many different elements there are within those messages. If you consider that each of those elements may have to be individually inspected and checked, this makes things much more complex when it comes to defending these networks. As well as that, 5G networks have new concepts like slicing and mixed networks: 5G networks talking to 4G networks talking to 3G networks. So there are a lot of moving parts, which could mean, which will mean, new areas that mobile surveillance companies will try to exploit.

One good thing, at least this time, is that unlike 3G and 4G, within the industry we are now defining security from the start for the 5G networks. It is always much easier to put security in place from the start than to try to engineer insecurity out afterwards. But one key thing to keep in mind is the difference between IT security and mobile network security. We already know that in 3G and 4G the vast majority of attacks come from non-legitimate entities, but they come in over the SS7 network. So unlike in IT, where you might just block off certain IP addresses or sources, there is no way one operator can block off another continent, or often even another country. You have to accept that you are going to be targeted and attacked, and so you need to put in place defenses that detect what is going on within those flows. For a good discussion of this, and of why 5G in itself will not solve a lot of these issues, please see the blog I wrote within the GSMA, which covered the issuing of a new document, a new series of recommendations within the GSMA on 5G interconnect security.

So finally, the toolbox of commands in 5G starts to get very, very complicated. I will not go into too much detail; again, there will be pros and cons for the attacker. Unfortunately, they also get multiple new ways to get location: they can sign up for events and subscriptions. So it is going to get quite complex, and there is going to be a lot of work required to detect and block these attacks and try to stop these surveillance companies. Again, with my grid of the previous attacks, 2G/3G, 4G and Simjacker, if I add in the 5G commands, things get quite complicated. But these are my estimations of their distribution.
Some of these commands are quite interesting, especially GMLC_PL, but time will tell whether these distributions are correct, and how often and when the attackers will try to use them.

So I have covered a huge amount of information in this presentation, and thank you, we have come to the end of it. But there are a few key takeaways and conclusions I want you to take from it. As you can guess, and as I have shown you, surveillance companies do exploit mobile signaling networks today. But they are not static: they adjust their techniques based on defenses and end users. If you read articles saying SS7 is wide open, that is not the case for the vast majority of operators; operators are doing things. At the same time, the surveillance companies are also making changes to avoid the defenses that are put in place. Another key point: 5G networks are not invulnerable. If somebody says that 5G networks are fully secure, that is not true. There will be opportunities for surveillance companies and they will definitely try to use them. Like I said, surveillance companies do not care about the technology path. They care about the target, about getting information on the target, and they will use whatever technologies they can. And again, mobile operators can, and many do, detect and block attacks, and the key to this is intelligence. From our perspective, the key thing is that, like many types of security, you cannot just press a button and walk away. You have to actually look after it, use your intelligence, investigate, because these surveillance companies have huge resources and will make efforts to bypass and go around any type of defenses you have in place. And that brings us back to the theme of this presentation: why are we doing this analysis? Because if you cannot see what is going on, you cannot hope to stop it in the future. So watching the watchers, as well as being interesting, is also critical from a security perspective: the more you know about what they are doing, the better you are able to detect and block them. Thank you very much for watching this presentation. I have only scratched the surface of surveillance companies and their use of mobile networks, but I hope you found this information useful, and I look forward to taking your questions now.

Yeah, welcome back from this presentation. Many thanks to Cathal here for this talk. There are already lots of questions coming in for you. The first comment is: what a great talk, a lot of information and very well presented. We have to thank you for that. Now, on to the questions. Is there a list of these surveillance companies available?

Thanks very much for that. Excellent question. I would like to know the answer myself; I could really use that list. To be more serious, there are some journalists who have done research on this, and a lot of them have put up lists out there, but there is no definitive list. Everybody probably knows the names you have heard about, such as NSO, or Circles, which is the division of NSO that does this. There are other companies like Rayzone, Verint, companies like that. But there is no definitive list, although journalists have looked at this.
One thing to keep in mind about these companies is that, we found, some of them actually resell to each other or work with each other. It is often quite difficult to say it was one particular company rather than another; sometimes they do have a degree of coordination or relationship with each other. There have been some good articles on Forbes about this, some articles in The Guardian, and some other information out there, which is probably the closest thing any of us have to any kind of list.

And do any companies sell historical geolocation data coupled to mobile phone numbers?

These surveillance companies, per se, I do not think that is the business they are in; they are more in a request-response type of business. Selling historical information is probably not what these surveillance companies try to do. We have all read and heard about those other companies which are building up information on subscribers, maybe taking it from apps and so on. Possibly those may sell it, but we do not think that is the business these surveillance companies are in. They are more into filling direct requests from their customers at the time.

So how long is the information gathered during the information gathering phase useful for the attacker? Does it go stale when the victim changes mobile cell, or is it constrained by time somehow?

That is a good question. In the information gathering phase, what they are trying to get is things like the IMSI, and that really does not change much. However, they also need to know what rough area the subscriber is registered in, like the serving MSC, or the MME in Diameter. That information can change, but not too much, so at least part of the information has a lifespan of possibly a few hours or days. The IMSI information might last longer. But the information gathering phase is not only used for direct location tracking. Sometimes it is also used to see whether a number actually exists. Surveillance companies are not always fully informed: if a surveillance company is trying to track somebody, they may only have partial digits of the number, not the full number. So they know, say, the first eight digits, and then they cycle through all the remaining digits to see which numbers actually exist. From that perspective, if they do this sort of attack, they can figure out whether those numbers exist or not, and that information also has a fairly long lifespan.

So do they only want to get information on the target, or do they do things like psychological warfare, lawfare, trauma-based mind control as well?

Mind control would be a bit difficult, but the vast majority of the activity they do is location tracking. It is their bread and butter, the main thing they try to do, and the information harvesting part is often directly related to it. But there is a certain percentage of the time that they do other activities: we have seen attempted interception of phone calls or text messages, for example, but that does not seem to be their primary goal. You could conceivably, if you intercepted text messages or phone calls, get other information, but it does not seem to be their primary function. The primary function is to try to track the locations of people.
And I imagine that, from their customers' perspective, that is what they need or want to know most of the time. If they do want more information, they do not just rely on SS7 or Diameter; they may have other methods to get information from the handset, or about who is talking to whom. But for location tracking, this is probably the main niche where they see these technologies being useful, and they probably think it is one of the better ways of doing it, because it is independent of operating system or location in the world.

All right, you just said location tracking; we have another question for you. Can Simjacker also be used to locate lost or stolen cell phones, or other SIM-using devices, like a lot of cars these days or the new bikes? And is there a restriction on distance?

That is a good question. Like I said, the Simjacker attack depended on a specific library being present on the actual SIM card, and that library is not present in the vast majority of the world's operators. It is mostly, and again there is a map in the report, in South and Central America, parts of North America, and then parts of Europe and Asia. Let's say you were in a country where the library was distributed. It is not just cell ID: as we covered in the report, you can actually request a sort of differential cell measurement, so you can get a better location. It would not be exactly the same as GPS, but it would be reasonably accurate. However, this is the problem with Simjacker: who is to say you are tracking your own bike or your own car? You could be tracking somebody else's bike or somebody else's car, and at the end of the day you are doing location tracking. I do not know what the long-term plans for these libraries in these countries are; a lot of the operators have put security in place, so nobody can send these messages anymore. So it would be an option, but location tracking as a service is not something I would be too happy to see sold commercially.

Is there a way to check if my SIM card is vulnerable to Simjacker?

Yes, there is. The good folks at SRLabs updated and released an application called SIMtester. It is free and open source. You can download it and check it against your SIM card, and it will tell you what SIM card applications are on the card and what their security settings are. From that you will be able to tell whether, if it received such a text message, the attack would actually run against it or not.

I'm back. The question was: by using the Simjacker attack with STK commands, is it possible to extract keys contained on the SIM card, such as the individual subscriber authentication key?

Welcome back. It is not possible via that method, because you do not actually have access to the SIM itself; you only have access to a subset of STK commands. That is related to a previous, unrelated vulnerability which was discovered in SIM cards: Karsten Nohl did some research in 2013 and was able, by sending messages, to get access to the actual key of the SIM card. When you have access to that key, you are able to access all the STK commands. The good thing, so to speak, about the Simjacker attack was that you did not need the SIM key to get access to a subset of commands; but via Simjacker you would not actually get access to that key.
Although, I will say, in doing some of our own testing of these attacks we were actually able to potentially exceed the boundary of the sandbox for the S@T Browser library, because we were able to break the phone or the SIM card on different occasions. So there was a bit of leakiness. It is highly, highly unlikely you could get access to the key, but perhaps for some proprietary, old SIM card it may be possible to somehow escape the sandbox further and try to get access. But I highly doubt it.

So, let's see where we go from here. Which data are the sources for your plots?

That is a good question. The plots are really from our experience; they are not strictly empirical. The vertical axis is about the amount of information that is required, and the bottom axis is, from our experience working with operators, how easy it is from their perspective to detect and block these. So it is more of a rough guide, based on our experience and what we see: a way to easily visualize the types of attacks and the choices attackers have when they go to do them. And you can see the evolution over time: the attackers much, much prefer to use the simplest thing. They often have access to a person's phone number far more easily than to an IMSI and their serving cell address or serving MSC address, so they would much rather do those types of attacks all the time. We have only forced them into other types of attacks through pressure, and if they get a chance to go back to the original attacks, which I showed when they used the global opcode variant, they will go back if they can. But not every operator moves at the same speed, and not every operator has the same level of protection, so the attackers have different choices in different places. That is where the source of those plots comes from: our experience.

Previously it had been suggested that it is easy to find insecure SS7 endpoints on the internet. Is this not a source of connectivity anymore?

It is much, much rarer now than it was in the past. I would never say never; they probably do exist. But to get access nowadays, like I said, it is those three options, and a lot of it comes down to paying for access, because it is a money-based system: anybody involved in transporting any sizable interconnect traffic is going to try to charge you for it. So from the surveillance companies' perspective, the easiest and most regular way of getting access, which is also important, is normally to pay for it. What they try to do is set up some sort of front company, or partner with a different company, and then work with a company that does something like IoT services or SMS services. They do something legitimate for a month or two, and then they might start sending other types of traffic. Just finding an open endpoint is quite rare these days.

How is it possible not only to identify the types of attacks, but also the actual entities, the surveillance companies, behind them?

With a lot of work. We work with our customers, and it takes a lot of research. That is a very good question: how do you know the intent behind it, and how do you then put a name to that source? It does not come easy; it is a lot of work.
I mean, we have our intelligence, they have their intelligence; like I said earlier, it is very likely somebody from these companies is on this call watching this presentation. But we talk with our customers, we look at the sources of the information, we try to trace back to the original sources, where they come from, and then see who they sold access to. Sometimes this information is forthcoming, sometimes it is not. Doing that, we can put together a sort of framework of what types of companies may have been granted access, and who they work with. So it is a bit of forensics, trying to figure out who is talking to whom. Like I said, some surveillance companies resell access to each other, and things get really confusing at that stage. It is not something we do lightly, and it can take quite a bit of time to pin down, especially as they change. But it comes with research and intelligence.

And do you notify people you found are being tracked?

We do not notify people directly. We notify our customers, the mobile operators, and then they may go ahead and notify people. That takes me to another point as well: notifying the sources, the networks the attacks are coming from, is sometimes a really unusual experience. Sometimes you get an answer, sometimes you do not. In many cases, like I said, the operators may not be aware of it, and that can make for some unusual conversations if attack activity is coming from a network without them being aware of it; sometimes they may be aware of it and just not able to do anything about it. But on the side of the people being tracked, we notify our operators, our customers, and they take it forward and decide whether to actually do anything about it. Like I said, in many cases these are attacks we are actually blocking, so the information has not been retrieved and the person's location has not been tracked. In that particular situation they will make a decision themselves about what to do next.

All right. So is there anything that can be done to protect oneself from this surveillance? I mean, could I just use my old cell phone rather than a smartphone, or is there anything else I can do, apart from the app you mentioned for checking the SIM card?

Unfortunately, on the mobile network side, not really. You can decide not to use text messages or phone calls, but it does not make a difference: you still have to register in the mobile network, at some stage that data is recorded, and that is what the surveillance company is targeting. That is one of the more frustrating things. The best thing you could possibly do is ask your operator whether they have protection in place and whether they are looking for this. This does get a bit confusing, though, and it is something I would have covered with more time: when mobile operators make a decision on protecting their network, most operators around the world are protecting their network, or at least protecting their own subscribers. Things get more complicated when it comes to roaming subscribers. You could roam from Germany to America, Russia or somewhere like that.
And the operators there then have a decision to make about whether to protect this person who came in, particularly because they do not have all the information about your home network. They may think: this person comes from Germany, and now there is a message coming in from Italy; maybe those networks have some sort of arrangement with each other. Not every operator around the world can know. This is why, like most things in life, it is quite gray. When you see a report that somebody was attacked over SS7, it may not be that the mobile operator was not trying to protect people. It is definitely trying to protect its own subscribers, but it may not be able to protect roamers, or may not have all the information needed to protect them. And in fact some of that activity may actually be legitimate, so blocking it could cause serious problems for subscribers roaming into the network. So, coming back to the original question: there is not too much you can do personally, this information is stored regardless, but the main thing is to ask your mobile operator what they are doing and what type of protection they have in place.

And does the SMS exchange show up on the monthly bill from the operator? So can I see from the bill that these SMS went back and forth in a Simjacker attack?

Well, first of all, very few people actually check their monthly bills for SMS anymore. In that particular case, they had a variant, the one I showed in the Wireshark capture, where the message is sent outwards to that global title, the SS7 node in the Channel Islands. Because that would not be acknowledged successfully, it would not actually show up in your billing records, and we believe that was one of the reasons for it: there would be no possibility of it showing up in your billing records. The vast majority of networks nowadays do not bill for the reception of text messages, so received messages will not show up on any bill. Yes, if your phone sends out a message, in those places, it was Mexico for example, it may show up. But again, most people have all-you-can-eat type plans and are never even going to see this.

All right. That would be the last question for now; I do not see any other messages coming in. So thanks to our signal angel, Vanny, she did a great job sorting the questions and giving me really good support here. Thanks also to all the people from the video operation. And of course, thank you for this really interesting talk; I had a really interesting hour here together with you. So, Cathal, let me thank you also on my own behalf, and I hope to see you again on this topic. I think there is still more to come.

Thanks very much. Thanks for letting me speak.
Every day, surveillance companies attack mobile networks, attempting to track the location of mobile phone users. We will analyze, using real-life data, these surveillance companies’ tactics and show the different ways that users are tracked in the wild over 2G, 3G and 4G networks. For 5G, we describe the critical functions and information elements in the core network that might be targeted by these attackers. Mobile core signaling networks have been known to have exploitable vulnerabilities for several years. However, very little information has been presented on whether these vulnerabilities are being exploited in real life or not, and if so, how it is being done. This presentation will give first-hand information about how location tracking, the most common form of mobile signaling attack, is being done over multiple types of mobile networks in the wild today. We will start by briefly introducing mobile telecom networks, their known security flaws and how surveillance companies exploit these flaws. Surveillance companies are success oriented and have a toolbox which they use for location tracking of mobile phone users, which is the most common attack. Based on real-life experience we will describe what “tools” we see in the wild and how they work. We will also describe how attackers optimize attacks based on the target network and technology, and how attacks have changed over time as some mobile operators have begun to put in place protections. We will also show a visualization of how these attacks can happen. Finally, we will make a projection for 5G core networks, and how they will also be targeted by surveillance companies as they are deployed globally over the next few years.
10.5446/51939 (DOI)
This is our next talk coming up. It's Sofía Celi with the state of digital rights in Latin America. Sofía, up to you.

Thank you so much, and thank you everyone for tuning in today. As was said, my name is Sofía Celi and I am a cryptography researcher and developer. I led the development of version 4 of the Off-the-Record messaging protocol, and in my spare time I also research the state of digital rights in Latin America, specifically the usage of digital tools to enact gender-based violence in Latin America. But today I want to talk to you about the state of digital rights in Latin America overall. This is a really big topic, so I will try to just cover the basics; don't expect too much depth on the topics I am going to talk about, it is more of an outline, so that people are more aware of the situation.

So let's start with one of the most interesting questions when talking about the state of digital rights in Latin America: what is Latin America? For some people in the audience this may seem simple, because people already have a preconceived notion of what Latin America is. But for Latin Americans, for people living in South America, Central America, the Caribbean or the south of North America, this is not an easy question, because there are a lot of questions around when Latin America was actually created as a concept, and what it actually means to be Latin American. Does it mean sharing a certain cultural background, or what does it mean? In some parts of what some people would consider Latin America, some people don't actually think they belong to Latin America, because belonging to Latin America is associated with a background of being colonized by European powers and with an indigenous background; and there is racism in Latin America, as everywhere in the world, so certain people in Latin America who think of themselves as white don't want to belong to Latin America for those reasons.

So Latin America as a notion is a controversial topic within Latin America itself. But in this talk, when I speak about the Latin American region, I will be referring specifically to countries in South America, Central America, the Caribbean, and the south of North America, mainly Mexico. And when I talk about Latin America as a region, I am not including French Guiana, because that is part of la France d'outre-mer, the French overseas territories, as part of the Guyane region.
So that's the first question, which has now sort of been answered, at least for this talk. To give a little background on why I think it is important to talk about the state of digital rights in Latin America: one of the reasons I wanted to give this talk at CCC is that at these kinds of conferences we hear a lot about the state of digital rights in the global north, but we are sometimes oblivious to what is happening in the global south, and to whether the same ideas and notions we have of digital rights apply there. In this case I am focusing specifically on one region of the global south, Latin America, but more research is needed, and more people should be talking about other regions of the global south.

To give some historical context on why it is important to talk about digital rights in Latin America, I want to start with a specific historical instance of something that happened there. The reason I highlight this specific instance is that we sometimes think this kind of surveillance, censorship and violation of human rights only happens in the global north, because we associate transgressions of digital rights with spies and big countries. That is not actually the case, and with this historical context I am going to show you that it also happened in Latin America.

So what is the scene? The scene is the Cold War, the historical period in which two superpowers, the Soviet Union and the United States of America, fought for economic, political and, I would say, even cultural dominance around the world. During the 70s and the 80s, Latin America elected certain socialist, and I put socialist in quotes, governments as representatives of the people; I say socialist in quotes because we may have a different notion of what socialism is, but this was a specific notion of socialism in the Latin American context. These governments were actually elected, but certain other powers that exist in the Americas did not like that these governments were elected, and they did not like what happened in Cuba, meaning the Cuban revolution. So in order to prevent all the Latin American countries from turning completely socialist, the United States backed a campaign of political repression and state terror involving intelligence operations and assassinations of opponents, mainly in Latin America, and in fact the deposing of many of the elected socialist governments. This backed campaign of political repression happened in several Latin American countries, mainly in Argentina, Chile, Uruguay, Paraguay, Bolivia, Brazil, Ecuador and Peru, and the involvement of the United States was not only about backing them economically: they actually provided planning, military cooperation, training on torture, technical support and military aid. This happened under several administrations of the United States, mainly those of Johnson, Nixon, Ford, Carter and Reagan. And this cooperation did not only happen between these Latin American countries and
the United States, but also among the same Latin American countries. The countries I just mentioned also cooperated with each other to plan how to efficiently torture and kill political opponents of the regimes that were installed, with United States backing, in place of the socialist governments during the seventies and the eighties.

Not much was actually known about how these operations were carried out. Of course it was known that a lot of people were killed, disappeared and tortured, but it was only in December of 1992 that Martín Almada and José Agustín Fernández drove to an obscure police station in the suburb of Lambaré, near Asunción, in Paraguay. They drove there because a whistleblower had sketched out a plan of where this police station was, and what they found was a cache of 700,000 documents piled nearly to the ceiling, of something that would later be called the Terror Archive, los Archivos del Terror: a complete paper database of records of interrogation, torture and surveillance conducted under the military dictatorship of Alfredo Stroessner, the Paraguayan dictatorship during the military intervention in Latin America. What they found out is that informants, telephoto cameras and wiretaps had been used to build this paper database of everyone who was viewed as a threat. And I stress: everyone who was viewed as a threat. Not only people deemed to be actual political opponents of the military regimes of the time, but everyone thought to be some kind of threat. It could be an artist creating art against the regime; it could be students who for some reason decided to read Karl Marx. All of these people, who were somehow associated with socialism or somehow deemed opponents of the regime, were targeted as threats, and not only them but also their friends and associates. So they created a database not only of the people they thought were a threat to the military regime, but also of any friends or family those people had.

The archive, 700,000 documents in total, was photographed and comprises 593,000 microfilm pages. The result of all of this, of the database and all the torture and killings, was that up to 50,000 people were killed, 30,000 disappeared, and 400,000 were arrested and imprisoned. To this day, if you go to any of the countries I just mentioned, you will meet families and relatives of people who were disappeared, and in a lot of countries people are still asking for the bodies of those who disappeared, or at least to know what happened to them.

The reason I am speaking about this is that we sometimes think these things don't happen at this level, that people don't get disappeared because there exists a paper database that marks you as a threat for whatever reason, just because you somehow read some book that someone deemed
unacceptable to read at that time, which is what happened in Latin America back then. The other reason I am showcasing this is that we sometimes think countries only surveil their own citizens. That is not true: even in the seventies, the different countries under military governments were surveilling the citizens of other countries, and they were sharing these databases with each other. And a further reason I am highlighting this is that this was not a solitary operation that happened just because these Latin American countries decided to do it: it was backed and actually planned by the United States. This is well known, and victims of the torture have stated several times that, for example, when they were being tortured, there was always a United States person present in the room, training people on how to efficiently torture, how to efficiently track people, and how to efficiently create these databases. What would happen today? Today, the paper databases that existed back then would be much more efficient, because people use digital tools every day, and they would likewise be backed by some more powerful country with an interest in the economic and political scenario of several countries. So today it would be much worse. The reason I wanted to highlight this is to show you a historical instance where this did indeed happen, and that it could indeed happen again in the future.

So what are the themes when talking about the digital rights of Latin American countries? In this talk I decided to cover four main topics that I think are really important when talking about digital rights in a region. The first is what kind of privacy laws exist in those countries. The second is the types of coercion that exist in those countries through the use of digital tools. The third is the state of surveillance in those countries. And the fourth is what kinds of secure communication are actually provided in those countries.

So let's start with the first one, the privacy laws. Since the arrival of the GDPR this has been a hot topic for some years now, but it is not something that was created by the GDPR: privacy is defined as a human right, and because of this, most Latin American constitutions provide some kind of protection of privacy. In the past the constitutions focused on privacy only in the non-digital world, but with the arrival of the digital world, privacy also applies there. Since 2010, 62 new countries worldwide have enacted data protection laws, and every country in Latin America, as I have said, has some form of private data protection. The first country in Latin America to have a privacy law was Chile in 1999, followed by Uruguay, Mexico, Peru, Colombia and Brazil, although for Brazil I would put an asterisk: even though the law was created in 2018, some parts of it remain to be settled until August 2021, some parts were actually vetoed by the current president, and there has been pushback against the law for a
while now. Other countries that actually have a privacy law are Barbados and Panama.

One question you can ask yourself when talking about privacy laws in Latin America is whether this is a phenomenon of its own, or just something inspired by the arrival of the GDPR. In most of the cases that we have seen, it is mostly inspired by the GDPR, because that is what is currently in fashion, so to say. But a lot of research still needs to happen to be able to say whether this is a phenomenon of its own, or whether these privacy laws are actually thinking about the reality of Latin America rather than just copying privacy laws from other countries. So the first question that needs to be addressed when talking about privacy laws in the Latin American context is what privacy itself is. This is something that scholars and researchers are now discussing much more, in the sense that even though for one region privacy means one thing because of the cultural background of that region, for another region privacy may not be understood the same way, because the cultural meaning of privacy depends on the region of the world you're talking about. So, for example, something that still needs to be researched is what privacy actually means in a Latin American context: what do Latin Americans actually think of privacy itself?

The other thing that gets talked about a lot when discussing privacy laws is what privacy actually applies to: whether only the contents of communication get the status of being private, or whether, for example, the network transmissions are also part of privacy. And this is likewise being discussed when talking about privacy laws in Latin America: what data is defined as private, as I have said, and what about metadata, or subscriber data, and what about linked data? Metadata is usually defined as the auxiliary data that gets created, for example, while having a conversation. Subscriber data is sometimes defined as the auxiliary data that the initiator creates. And linked data is the data that can be linked to an individual and used, for example, for advertisement. The different laws in the various Latin American countries take different approaches to all of these definitions, but most of them are mainly inspired by the same GDPR definitions.

What about anonymous data? Some of the privacy laws in Latin America do have some mention of anonymous data. In the Brazilian one, for example, they say that anonymized data should be treated as private if you are able to de-anonymize it using the same mechanism that the service used to anonymize it; but if you took the data created by some service and later, using a completely different mechanism than the one provided by the service, you managed to de-anonymize it, then that data is not considered private anymore. This is mentioned only in broad terms, but at least it's there. I just wanted to highlight these questions to say that most of the time, when you read
the privacy laws of different Latin American countries, these notions are only vaguely defined or vaguely mentioned, without going much in depth.

One important thing to think about when talking about Latin American privacy laws is the data that gets sent or transmitted to other countries. One of the realities of Latin America is that there are some services that are Latin American themselves, meaning companies that were created in Latin American countries and where the data is stored in the same country; but most of the time Latin Americans are consumers of applications, mainly social media, that are created in other countries, where the data goes to another country, or of communication systems hosted in another country. Most of the privacy laws in Latin America say that if the data gets transmitted to another country, that other country should offer the same level of protection as the one given in the owner's country. But even though this is stated in most of the privacy laws, in practice it does not happen, and to this day I have not seen anyone actually asking any of the companies that we use for social media or communication to surrender the data, or establishing that they are not applying the same level of protection that Latin American laws require for this data. So that is in the law, but in practice not much is happening. Another question you should ask yourself when talking about privacy laws is what about the data that is already stored. Even though the privacy laws exist, and some have existed for quite a long time, certain companies in other countries have been storing this data for a really long time, and we don't know if there will at any point be any kind of request saying: yes, surrender the data of all our citizens that you have stored for this amount of time. That hasn't happened so far.

Now let's talk about the second topic that I outlined in the introduction: the types of coercion. What kind of coercion actually exists in Latin America with regard to digital rights, and specifically with regard to freedom of expression, the current situation around COVID-19, and gender-based violence through the use of digital tools? So, first, the health system and COVID-19. This is a really complex topic, but I will just try to highlight certain instances in which digital tools are being used to diminish the rights that people have to other things; specifically, digital tools have been used to diminish people's right to the health system and to the education system. One instance of this happening is the usage of biometrics, which is on the rise in Latin America, specifically in Chile and in Brazil. One of the proposals in Chile, for example, was to use biometrics during health checks, meaning that you would have access to certain public systems and certain health checks only by surrendering your biometrics. And this doesn't fit the Latin American region, in the sense that, for example, in the rural areas, many of the people
who were asked to surrender their fingerprints work the fields, or sometimes have some kind of disability, and their fingerprints could not be read, because there was no fingerprint left or because the fingerprint was really difficult to read. What this means is that the health system became unfair in itself, because these people were not able to access the health system anymore: they could not surrender their biometrics, because the idea of biometrics didn't match their reality. And this is something we will come back to during this talk quite a lot: sometimes Latin American governments take up a technology because they think it's cool, or because it's what other countries are currently using, but they don't think about what it means to apply this technology to the Latin American reality.

Another instance in which biometrics are abused in Latin America, and this is an abuse that is happening not exactly because of Latin American governments, or maybe it is in a way, is that when Latin Americans have to travel to another country, for tourism or for work or whatever reason, they usually have to apply for a visa for the Schengen area or for the United States, and most of the time they are asked to surrender their biometrics at the time of applying for the visa, or sometimes at the time of crossing the border. It is not really known what happens with the biometrics of the people who are asked to surrender them for these visas, or the legality of all of this; it is not known where this is stored. And in some instances, when you are submitting your visa application and surrendering your biometrics, you are also asked to sort of surrender your social media, in the sense that, in order to be granted a visa, you have to accept that your social media is going to be examined and checked for anything that the country you want to travel to deems unacceptable.

But coming back to the health system: one reason why Latin Americans use social media so much is that sometimes social media is the only option they have. As you will see in a few slides, because Latin America has an unfairness in what kind of discourse gets pushed in the mass media, what happens is that minorities usually go to social media to try to insert themselves into the discourse, to get support from other people, or to showcase what kind of unfairness is happening to them. Which is bad in itself, because this is the only way they can publish what is happening to them, and at the same time they are surrendering their information to social media companies that exist in other countries. In the case of the health system, for example, this happens a lot with people who are HIV
positive, in the sense that, because they don't have support from the local Latin American governments, they use social media to find psychological support, or any kind of support, or even to find a way to buy the medicines that they need. When the government is not providing the medicines they need, what they do is create their own market in which they can buy those medicines. What this means in reality is that there is an unregulated market for medicines, there is a distortion, and of course there is data about the health status of the people suffering from this stored in the social media of companies that exist outside of Latin America. The same happened during COVID-19, when basically the health systems of all Latin American countries collapsed: there was a lack of medicines, and people started selling medicines again through social media, or oxygen tanks and other things. Sometimes the only way people could actually get medicine was through the use of social media. And this again raises the point that this was the only way people could access the health system, and at the same time they were giving away their private information and their health information to companies outside of the Latin American countries, because it was the only way they could access some kind of health care.

Just to give a specific example: a study from 2019 showed that in Ecuador 10.7 percent of people between the ages of 15 and 49 are classified as digitally illiterate, and only 41.4 percent have access to a smartphone. There is more access to smartphones, and to the internet in general, in urban areas, and there is more access to smartphones for men than for women and other genders. This is really important to highlight, especially when talking about COVID-19, because one of the proposals Latin American governments have had is that, in order to know how many people have been infected with COVID, you should install an application on your phone that tracks your movements. This of course has privacy implications for your location data, but it also doesn't really make sense in a Latin American scenario, because most people don't actually have a smartphone. So asking people to install an application for COVID-19 when they don't have a smartphone, or asking people to install an application to, for example, schedule health checks, doesn't make sense in Latin America and just widens the unfairness of the health system itself. And this is again an example of the ideas that local Latin American governments sometimes have: they see that other countries are creating applications for tracking COVID-19, but it doesn't really translate to the Latin American reality. So, yeah.
On another topic, related to coercion and unfairness: gender-based violence. A study by the World Health Organization in 2002 showed that gender-based violence is the main cause of death of women in the world, and that 23 percent of women worldwide have reported some kind of digital gender-based violence. In the Latin American context itself: in Ecuador, every 71 hours a woman is killed. And these women who get killed in Ecuador are not killed because someone, for example, was trying to steal something from them; they were killed because they were women. This is often called femicide. Latin American countries are pretty sexist.

As I said, most of the minorities in Latin America don't have a way to express their struggles or to gather support through mass media, or through the main discourse that happens in those countries. So what they do is switch to social media, or other kinds of digital tools, to empower the struggles they are living through. But what happens is that, because they have been using social media and digital tools, other groups have targeted them through the internet and surveilled their usage of social media. They have created strategies of hate speech, unauthorized sharing of intimate images, cyber sexual harassment, trafficking, usurpation of their identity, and censorship. If you talk to any person or activist who is working on gender-based activism or LGBTQIA+ activism, you will see that they are constantly targeted through social media. And in one instance, when I talked to some activists, it was actually shown that right-wing parties and right-wing people create trainings on how to harass activists using social media. One important case to highlight here, again from Ecuador, is the case of Juliana Campoverde, a woman who was killed by a Christian pastor. What this Christian pastor did was hack into the social media of this woman to try to convince the family that she had not disappeared. At the beginning this was not even taken into account by the court; only later, through pressure, was it taken into account.

As I said, the minorities who actively fight for the rights of women or LGBTQIA+ people in Latin America get a lot of hate speech. They constantly get unauthorized sharing of their photographs, derogatory comments, mass reporting of every post; they get sent masturbation videos of men. And in one instance, where I was helping one woman, all of her photographs were taken from her social media, and a specific man created art of a sexual nature based on the photographs this person took from her account. That happens all the time. And most of the time, when women or other groups try to say publicly that this is what is happening to them, or go to the judicial system to say that this is happening and that it is unfair and illegal, they are dismissed, because people say it is just hysteria of women. In the case of grooming, this happens in Latin America specifically to teenagers and children, and what the research has shown is that it happens through the usage of social media for teenagers and through games for children. One
important thing to note here is that in most Global North countries it has been argued that one of the reasons why we should break into the encryption of certain secure messaging applications is to find child predators or child sexual harassers who sometimes operate at an international level. And while it's true that sometimes they find these sexual predators, what happens is that this person gets locked up or something similar, but there is no real reparation for the people who were harassed in Global South countries.

And as I said, there are massive attacks against people fighting for gender-related rights. In one instance, for example, we know that there were even training sessions on how to censor activists working on gender-based rights: there were seminars on how to efficiently target them, how to efficiently send memes and create manipulated imagery of these persons, so that they would silence themselves and their activist plans. So the goal is silencing and intimidation. In the specific context of domestic gender-based violence, it doesn't happen the way it is often described in the Global North, where it often happens by installing some malware on the cell phones or devices of the victims. That doesn't happen a lot. What happens most of the time is that passwords get stolen, either by coercion or because the person sort of guessed them. And what also happens in most domestic abuse cases is that the perpetrator takes away the devices of the victim, their computer or their smartphone or whatever they have, in order to isolate the victim. So it's not so much about installing malware, as it is in Global North countries.

In this area there's a lot that is missing, and a lot of research that is needed. There is a need to call this what it is: this is actually targeted surveillance, even though some people have not called it that, because they say this is just activists complaining too much, or that this is just the state of the world, in which obviously, if you're a feminist, you will get all of this targeted harassment. But that's not true. What the research in Latin America has shown is that this is targeted harassment of groups: there are seminars that train people on how to do this efficiently, and there are Facebook groups and private WhatsApp groups in which the social media accounts of activists are shared and people are encouraged to sexually harass, or harass in any way, these people. Something that is missing, and that would be interesting research to do, is to find out how the campaigns of the dehomosexualization clinics that operate in Latin America happen over social media. What are dehomosexualization clinics? They are clinics that exist in Latin America that claim to be clinics for drug-related issues, where you recover from your drug addiction, but in reality they are clinics in which LGBTQIA+ people are kidnapped and tortured until they become straight. What becoming straight actually means, I don't know, but they are tortured, and in many cases raped, and this
happens all over Latin America, and the way these clinics advertise is through the usage of social media. So it would be really interesting to have research on what strategies these people are using with these digital tools. The same goes for "abortion clinics", and I put abortion clinics in quotes, because abortion is mostly illegal in Latin America, with exceptions here and there, and since yesterday legal in Argentina. What happens is that sometimes, in university forums and on social media, they advertise these abortion clinics, which are not really abortion clinics: they look like places where women can go to have a safe abortion, but they are actually places where they either try to convince women not to abort, or try to kidnap them so they don't abort. That would be interesting research to do. And if you want to know more details about gender-based violence and how it is carried out in Latin America, I have an upcoming talk at Enigma about the specifics.

But let's go to another topic, and that topic is surveillance. As I said when talking about activists, women's rights activists and LGBTQIA+ activists and various other activists, these minorities get targeted surveillance all the time. Many people don't want to call it surveillance, but it is in itself surveillance of these groups and targeted harassment of them. But let's now talk about the more classical surveillance that people have in mind, which is either mass surveillance or targeted surveillance of political opponents and human rights activists. About surveillance: this is also a question that has been asked when discussing privacy laws in Latin America, and one of the interesting debates has been what surveillance actually is. If you have to wiretap someone and a human reads it, is that surveillance? Or does it also include machine reading, meaning a bot gathering your information and later creating advertisement based on it; is that surveillance or not? There's still some ongoing debate around that. I really like the definition in this report by the EFF, which is really good, so you should read it if you're interested. Surveillance is not only about a human being reading private communication, but also about collecting, monitoring, intercepting, analyzing, using, preserving and retaining it. And what I like about this definition is that it also says past, present or future, because surveillance is not only about what is happening right now, but also about what has happened in the past.

In the Latin American reality, Latin American governments often allow some form of wiretapping or surveillance in the face of crime. In many of the definitions that I have found, they allow it in the face of crime if it is a serious crime, if it is terrorism, or if it is to aid an investigation. What exactly a serious crime is, is not actually defined, nor when a crime becomes serious; what defines terrorism itself is often not properly defined either; nor when aiding an investigation is actually a good enough case for wiretapping. Legal backing for other means of surveillance is not really there; for example, there is no provision that says
malware installation is allowed. But in practice, as you will see in a few slides, it occurs, and most of the time, when malware has been installed to target political opponents or human rights activists, it was malware or software sold by Global North companies, which opens up many questions that I will cover in a few minutes. In the case of location tracking, some kind of access is also provided, depending on the Latin American country. In Colombia, just to give an example, it is required that the telecommunication services, the two main ones in Colombia being Claro and Movistar, hand over location data to authorities. In Ecuador it is actually really easy to get location data about someone; in one instance, for example, if you go to the police and say that someone has disappeared, they will turn on location tracking for that person in order to find them, pretty easily. But something that is often found in most of the laws of Latin America is that, while it says that authorities can get access to location tracking, there is no clear definition of which authorities actually have access to it, or, even more, if the authorities share it with other authorities, how that sharing actually happens. That is not really defined.

Brazil, Colombia, Chile, Mexico, Peru and Honduras all have data retention obligations, which require that vast amounts of data about users be logged, and that law enforcement be given access to it if they need it, for example to aid an investigation of a serious crime. In practice this is really oblivious and opaque to the user: sometimes there is some kind of notification saying that when you use a service this data is logged, but most of the time users don't really know that this is happening. If you ask an everyday person in Latin America whether they know that their data could later be used to aid an investigation, or that it could be given to the authorities, most people would not know that this is indeed what is happening. And again, there is no clear definition of which authorities actually have access to it.

Let's talk in general about the question of surveillance: is there actually mass surveillance in Latin America? What has been shown is that, in contrast to the Global North, there isn't the traditional evidence of mass surveillance, in the sense that there have not been many leaked documents showcasing a plan by a government to surveil its citizens at that scale. But there have been many instances of targeted surveillance. It is reasonable to say that there could be mass surveillance, but at least so far we have not had many examples of it. Some notable examples of targeted surveillance are the software installed by Colombia that is called Esperanza, and the other one, sold by Verint, that is called PUMA. This was spy software for telecommunications, and it was targeted at journalists, political opposition parties and human rights activists, and this was all done during the Álvaro Uribe
government in Colombia. Most of the time, the way they try to justify buying this malware and this software is by saying it helps anti-kidnapping operations, anti-extortion operations, anti-terrorism or anti-drug-trade operations. And this is a common thing all over the world: the reason they give for breaking into encryption, or for needing to buy malware, is that they are trying to aid a criminal investigation of some kind. The Citizen Lab, at the Canada Centre for Global Security Studies at the Munk School of Global Affairs, actually revealed the existence of command-and-control servers for FinFisher's remote intrusion surveillance software in Mexico, Panama, Venezuela and Paraguay, and of course this spyware was developed by a German-based company. The same happened with Hacking Team in other countries, mainly Brazil, Colombia, Chile, Ecuador, Honduras, Mexico and Panama, and it was the same: it was mainly targeted at political opponents, at human rights activists, at journalists.

How it was bought raises a lot of questions about why the Global North countries and these companies were actually selling this malware to governments, and about the legality of it. These companies couldn't actually sell it directly as it was, so they tried to find a legal way to sell it, and the way they found was using intermediaries: in the case of Ecuador, for example, there was a Colombian-based company that was buying the software from the Global North companies and then reselling it to Ecuador. Another question is whether it was even safe to use, in the sense of: was this malware really only spying on the political opposition, or whoever they wanted to spy on, and not backfiring and also logging data of the Latin American governments themselves, or of the people being spied on? As we know now, it was also doing that. So the governments that bought this didn't even have a notion of what they were actually buying. And what about the reality of buying software from another country to spy on your own citizens? That raises a lot of questions.

In the case of secure messaging, what we have found is that there is not a lot of usage of actual secure messaging, in the sense of the traditional secure messaging applications that exist today, like Signal or Wire. Many people in Latin America use social media. As activists, they mostly use social media because that is the only way: as they don't have access to mass media, social media gives them a way to publish what they are doing in some form. And the way they operate, how they are going to arrange meetings, what kind of strategy they are developing, is through the usage of WhatsApp groups. There is not much use of Signal, as far as I know, because it's not well known and because it's too slow. Specifically, when I have spoken with journalists in Latin America, they have sometimes told me that they don't use it because it's too slow, because people cannot really send documents over Signal as it takes forever, because it's too difficult to use, and mostly because their contacts do not use it. Most people don't really use Signal, and therefore they don't use it themselves: if your family or your friends don't use it, then you will not use it either. Something that is really
used is Telegram, because they have had sort of a marketing campaign towards Latin American activists, so many Latin American groups actually use Telegram. All the other secure messaging and communication systems are hardly used: you hardly see people using OTR or PGP, because they are much more difficult to install, so they are not really used that much. And what often happens, specifically when talking to activists who fight for indigenous rights, is that they often don't have the economic means to buy lots of smartphones and put Signal on them, or to buy a lot of computers. What they do is that they often have one desktop computer and share it between each other, because that's what they can afford. So in this case there is no really good solution for secure messaging for them. There is one instance in which you can say that a country has actually asked WhatsApp to surrender encrypted data about someone to aid an investigation, and that case was in Brazil, between 2015 and 2016. Because WhatsApp refused to provide this data, it was blocked for a time, and to this day there is still some debate about the legality of that blocking.

So, as I'm running out of time, let's go directly to some conclusions. In conclusion, there is still a lot to think about in Latin America in terms of digital tools, their privacy, and the surveillance that can be done through them. There needs to be thinking about the Latin American context and what privacy means to the region itself, instead of just thinking: we need a law, so we're going to copy some of the same concepts, put them into the Latin American context, and it will all work the same. The same goes for COVID: even though it may be a nice idea to have tracking applications for COVID-19, if they are privacy-preserving of course, in the Latin American context it may not make sense, because not a lot of people actually have access to a smartphone that they carry all day. There is still a need to think about the data that is stored in other countries, mainly because most people in Latin America use social media that is developed in other countries, mainly Facebook and WhatsApp, which means that the data gets stored in another country. We need to think about the malware market, and why certain companies from Global North countries are selling this malware, and what that means, and what legality it has. What we know is that malware is mainly used for targeted surveillance, specifically of journalists, but it is hardly used for digital gender-based violence. I know there are a lot of studies from the Global North about how malware is installed as stalkerware and all of that, but it's not that much used for gender-based violence in Latin America; in Latin America it's mostly about hacking social media accounts or restricting access to the device. We need more research to know how surveillance over digital tools actually impacts minorities: what it actually means, for example, for women's rights activists or LGBTQIA+ activists to be harassed all the time and to have their activities monitored all the time through digital tools. And we also need to think: is there a secure messaging solution that actually works in Latin America, for activists who
sometimes don't have the same level of access to technology as in the Global North, or the same access to the internet in general, the same connectivity, as in the Global North? More details around all of this will be discussed at the panel that I am inviting you to now, which will happen maybe in January or February, where I will talk to people who have been doing all of this research, about what the findings are and what we can actually do to think, in a specific way, about creating a secure messaging solution, or about privacy, in a Latin American context. And just some references: if you want to read about privacy laws, I put some links here; the same for surveillance, and the same for coercion, where coercion covers COVID, domestic abuse, hate speech, gender-based violence and all of this. With that, thank you very much. And I don't know how the questions work, but yeah, thank you very much.

So, thanks a lot for that really interesting presentation. There is still time for some questions here. Use our rC3 chat; it's linked in the chat tab below your video browser here. You can also go to Twitter and Mastodon; #rc3 would be the hashtag. So far, let me quickly see, yes, no questions yet, but I have a couple of questions. So in the meanwhile, folks, if you're interested, send us your questions; they will be collected and forwarded by me. In the meanwhile I have a couple of questions for you, until the others come up with their questions. You said that police treat females who have been harassed, when they actually go to report that something is really going wrong, as hysterical beings. So the question for me is, and that's a typical, say, male reaction, what is the distribution between males and females in the police and in the judicial system, for example judges, lawyers and so on? I suppose there is a certain disequilibrium.

Yes, there is not a balance between the number of women in the police and in general in the judicial system in Latin American countries, even as far as, for example, women presidents in Latin America, which are hardly a thing. I can think of instances in Argentina, in Chile, in Brazil. In my country there has not been an Ecuadorian woman president, although it should have happened, because there was actually an Ecuadorian woman who could have been the president, but they didn't allow her in any way. So yeah, there's always a disadvantage about it, which raises the question: if there were more women in the police or in the legal authorities, would they actually believe women? That could certainly be the case, but it could also certainly be the case that, even if there were women, sometimes, when women join mostly male-dominated fields, what happens is that they integrate into the same patterns of thinking, so that they feel more comfortable in the work they are doing. So it could be that even those women would still dismiss other women reporting abuse, because they have integrated into the same male thinking, just to preserve the job.

Okay, well, that's a sad story, obviously; I mean, that's exactly the way it shouldn't work. Okay, we have some questions flowing in, so the first one is: what is the reason for people not thinking about their own privacy, what do you think? So, in general, I think that in Latin America, with digital tools, there's still not this notion that you
are actually being surveilled, and that the data you are giving to companies actually has some kind of cost. What people think is that they are using social media as they would a casual conversation; they equate social media with that: social media is the same as having a casual conversation with someone, meaning that after I finish this casual conversation it vanishes into the air and there is no record of it. So there is still a need to say: no, this is almost like writing something on paper, because it has the same permanence of storage; it is not just talking. That still needs to be pushed into the agenda, maybe in the education system, to say that the usage of social media, or the usage of digital tools, has these actual implications. And I don't know if this is specific to Latin America; it certainly happens in Latin America, but I think it's also worldwide that people treat the usage of social media, and specifically WhatsApp, as if it were casual conversation, when that's not actually the case.

Great, thanks for that elaborate answer. Oh wow, now it's going on, that's good, we are getting the questions in. Can activists operate properly under the circumstances that they actually know that they're being spied on?

There have been some instances in which activists have been targeted and became sort of aware of it, and journalists who became aware of this. In Ecuador, for example, what they started doing is collecting the malware that they were sent, so there's a little collection of the malware that was sent to them, and they sort of realized this, but they figured it out later, when they were already targeted, because most of the time people in Latin America don't have access to antivirus software: it's really expensive and your company will not pay for it, or sometimes you install your antivirus software by buying the license in an illegal way; that happens a lot in Latin America. Also, when governments target political opposition or activists, they sometimes don't know exactly how to do it. There was one instance of a judge in Argentina who got killed, and when people analyzed his phone they found out that he had malware installed, but the malware was not functioning, because it was only meant to be used on a desktop computer, not on a phone. So even though the government sent this malware to try to target him, it was not effective, because it was not built for that purpose. So sometimes, even when governments buy this malware, they don't know exactly how it works; they just send it. Is there a way activists can keep working? There are some ways they can still work, but most of the time activists in Latin America mostly use social media and WhatsApp, and they get infiltrated quite heavily in those groups, so they can be tracked. What they do now is that they have created strategies on how to efficiently use social media and how to efficiently use WhatsApp, and if they actually need to come to certain important decisions or hold certain
important meetings, they gather in person rather than using any kind of digital tools. So that's a common thing: they prefer to meet in person rather than use any digital tool.

Okay. There's a question about this panel you've mentioned: folks out there are not quite sure where and how to find it, so if you could comment on that, that would be cool.

Yes, I'm still thinking about exactly where it's going to happen. There are people set to take part from Article 19, from Karisma, from the EFF. I did something similar at another event about secure messaging, but with activists from Hong Kong, and this one is going to be focused only on Latin America. I will put it on my Twitter, because that's where I usually put it; I usually also send it to mailing lists, and last time I also sent it to the Tor mailing list. So if people have ideas about which mailing lists I can advertise this to, I can also send it along. It's very much done in a hacky way, in the sense that it's self-organized, nothing advertised by a company. So yeah, if you want to know about it, get it from us.

Just tell us your Twitter handle. My Twitter is claucece. Yeah, there it is, perfect. That's the one we need, very good. Okay, let me see, we still have a little bit of time left now, maybe another question. You were talking about data retention in several countries of South America. Is that data usually then requested by judges, or do you have some information that it actually flows through some other, let's say, weird channels?

It flows through a lot of weird channels, and specifically sometimes the police have access to this data that gets stored, if it's important for a crime. I don't know, for example, and that's a question in itself, who inside the companies has access to the data that's being stored by those same companies, whether there are some kind of internal guidelines about who actually has access. Probably not. But usually it is the police who have access, or, if it's a serious crime, and when I say serious crime I mean something that gets more public attention or mass media attention, usually it's the prosecution, usually it's the judge; those are the ones who have access.

Okay. We also got some tech comments here. One person is asking: would cheap, disposable live USB OSes be of help for activists over there, something like Tails or similar?

It will definitely be of help. There have been some instances of, for example, the Tor Project actually reaching out to certain Latin American activists and helping them, and I know they have been doing lots of good work around that area. With Tails it's a little bit less, because installing Tails is still a little bit scary for people, and sometimes they don't really know how to install Tails; most of the operating systems that I have seen being used by activists, specifically indigenous activists in Latin America, is Windows. So yeah.

Yeah, the other operating systems are a little bit tougher to use, and plus, you said people are very, let's call it illiterate, when it comes to digital communication; I mean, they are using their phones and just think it vanishes into the air, so I think it's really hard to communicate the
use of those operating systems. So, I mean, there is also one more; I mean, digital rights, yes, that's also part of the topic: somebody is wondering if there are hints that manipulations were done at the level of elections. Do you know anything about that?

Yes, specifically in Brazil there's a really interesting cryptographer by the name of Diego Aranha who actually did a huge amount of research about the Brazilian elections. In the context of COVID, this is actually a really interesting question, because some governments have been proposing digital ways to elect presidents; to this day they haven't come up with a good proposal. In terms of physical elections, there's always manipulation in Latin America. In my country, Ecuador, there have been lots of scandals of actual physical manipulation of elections, where they create forged ballots, where they toss out big containers of ballots that they don't want. So even in the physical sense there are still lots of attacks around that.

Okay. So, Sofía, again, lots of thanks for this interesting hour with you, in the name of the audience and in my name, gracias. They can reach you through your contacts now. Back to the studio.

Okay, thank you very much, bye.
Oftentimes, we read about the state of digital rights in the Global North and its challenges, but we hear little from the Global South. What is, then, the state of digital rights in the Global South and, especially, in Latin America? Are those rights threatened in the same way that they are in the Global North? Do those rights apply in the same way? In this talk, we will explore these questions. We will touch upon the history of digital censorship in Latin America, how it has evolved, and how surveillance in the region is increasing. Perhaps one of the biggest challenges that we have as a community is the fact that we hear little from the perspectives of other regions of the world that are often consuming the technology and ideas produced in the Global North. This talk aims to bring the Latin American perspective to the table: to talk about the state of digital rights in the region, about the challenges on the way towards digital sovereignty, about the state of digital rights from a legal and political perspective, and more.
10.5446/51945 (DOI)
Welcome, everyone. We are now starting with our talk on decentralized identifiers. A decentralized identifier is basically like your email address in the digital world: you have one unique identification item. Many of us may have a lot of email addresses, or special-purpose email addresses, but here we're talking about the one main email address, the one you use for communicating with your telecommunication provider, with your hosting provider. So the one unique address you use for all your important stuff, if you do it that way; that is a unique identifier you could use. Attila is now going to talk about different methods for decentralized identifiers, which corresponds to the talk we just had on self-sovereign identity. And we will see each other here a couple of minutes after the talk for our Q&A.

Hello, my name is Attila from Chainstep, and I would like to present you the topic of our master thesis, which was about creating a DID method for the SSI ecosystem. I hope you like it, and let's start. The title of this presentation is "A DID method for Blobaa", and it describes it pretty well: it is about a DID method that is part of the Blobaa project. So let me start with a short agenda. The presentation is split into four parts. First, I would like to tell you what the motivation behind creating the BBA DID method was. In the second part, I would like to discuss SSI from a more technical perspective. The third part is about the key characteristics of DIDs, and the last part is finally about the BBA DID method itself.

So let me tell you something about the motivation. The motivation behind the Blobaa DID method was to make the Blobaa project SSI-ready. Blobaa is a hobby project investigating authentication mechanisms based on blockchains. Blobaa stands for blockchain-based authentication and is an open source project; the source code is hosted on GitHub. In its current state, it only uses the Ardor blockchain. Further information is available on the website and on GitHub. So, to enable Blobaa to act in the SSI ecosystem, the BBA DID method was created.

What actually is the self-sovereign identity model? SSI was first described in the article "The Path to Self-Sovereign Identity" by Christopher Allen. The key idea is to grant an entity full control over its digital identity. To do so, it utilizes blockchains as identity providers for digital identifiers. Even though blockchains are not the only option to store these identifiers, they are the most prominent one. Another key aspect of SSI is the separation of identifiers and attributes. Identifiers are represented as DIDs, so-called decentralized identifiers. Attributes, on the other hand, are wrapped in so-called VCs, which stands for verifiable credentials. An identifier in general identifies an entity, like an email address identifies the person who is in control of the address. Attributes are specific aspects of an entity, like a postal address, birth date or credit card number, but also permissions to do something, or statements from other entities. The combination of an identifier and its associated attributes creates an increasingly accurate image of a digital identity. Let me show you an example of how authentication works in SSI. At first, a user self-sovereignly creates one or more DIDs on an SSI-enabled blockchain. If one then wants to authenticate, one presents one's DID to the service one wants to authenticate to.
To check the authenticity of the DID, and to ensure that the user is indeed the controller of the DID, the service initiates a challenge-response workflow. To verify the signed response, the service resolves the DID and gathers the public key for the signature from the blockchain. With this mechanism, the user is able to self-manage one's identifiers and does not need to register with a third-party identity provider. If one then wants to add additional information, like a postal address, one forwards a VC to the service. The key thing is that VCs are not stored on the blockchain and can be managed wherever the user wants, preferably in the user-owned domain.

To standardize the authentication and authorization workflows, the Trust over IP Foundation published a general SSI architecture, the so-called Trust over IP stack. The Trust over IP stack is a four-layered dual stack which separates human trust from technical trust. Human trust is about trust that cannot be achieved by technology. It is about governance topics and questions like: how can I be sure that this implementation is doing what it is supposed to do? Or: how can I be sure that this provider is trustworthy? In general, it is about certificates and seals from trustworthy entities. Let's focus on the technical part. The technical stack is about technical trust, which represents trust in the mechanics of cryptography and mathematics. The bottom layer is all about identifiers and public keys. It serves as the root of trust, in which DIDs are registered and managed. The second layer is about connections: connecting SSI agents in a secure way. Agents are the pieces of software that represent entities in the SSI ecosystem. The DIDComm protocol, initially developed by the Hyperledger Aries project, is a peer-to-peer protocol and acts as the carrier protocol to achieve secure and reliable connections within the SSI ecosystem. The third layer is all about VCs. This is where the so-called VC trust triangle, consisting of the VC issuer, the VC holder and the VC verifier, lives. The VC issuer is the entity that creates VCs about the VC holder and signs them with its DID. The VC holder is the subject that the VC is about. He manages his VCs self-sovereignly and presents them to a VC verifier whenever he wants. The VC verifier is then able to verify the presented VC without the need to create a technical connection to the VC issuer; he only needs to know the DID of the VC issuer. Of course, to accept the presented VC, the verifier needs to trust the VC issuer. As an example: a bank, as the VC issuer, issues a VC about the creditworthiness of the VC holder. The VC holder then goes to a car dealer to buy a car and presents the VC to prove that he can afford the car. The car dealer checks and accepts the VC, and the VC holder buys the car and drives home with his new car. The top layer is about the ecosystem. This is where standards for VCs, that is, schemas, come into play. Every entity following and accepting a specific VC schema is part of a specific ecosystem.

So let's focus on the bottom layer and have a look at what a DID actually is. A DID is a resolvable decentralized identifier. It is backed by a verifiable data registry, which in most cases is a blockchain. A DID identifies a DID subject, which can be of any kind: a person, process, company, account, etc. A DID is controlled by a DID controller, which in many cases is the same entity as the DID subject. Only the DID controller is able to modify the DID document.
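As a side note: the service side of the challenge-response flow described above could be sketched roughly like this. This is a minimal illustration only; resolveDid and verifySignature are assumed stand-ins for a real DID resolver library and a concrete signature suite, not any actual Blobaa API.

import { randomBytes } from "crypto";

// Hypothetical stand-ins: a real service would use a DID resolver
// (e.g. the Universal Resolver) and a real signature verification routine.
interface DidDocument {
  id: string;
  authentication: { id: string; type: string; publicKeyHex: string }[];
}
declare function resolveDid(did: string): Promise<DidDocument>;
declare function verifySignature(publicKeyHex: string, message: Buffer, signature: Buffer): boolean;

// Service side of the challenge-response authentication flow.
async function authenticate(
  did: string,
  sign: (challenge: Buffer) => Promise<Buffer>, // performed by the user's wallet/agent
): Promise<boolean> {
  const challenge = randomBytes(32);         // fresh nonce, prevents replay attacks
  const signature = await sign(challenge);   // user proves control of the DID's key
  const didDocument = await resolveDid(did); // public keys come from the blockchain
  // Accept if any key registered for authentication verifies the response.
  return didDocument.authentication.some((key) =>
    verifySignature(key.publicKeyHex, challenge, signature),
  );
}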
Every DID is resolvable into a cryptographically linked DID document. A DID document contains publicly accessible information, like public keys and service endpoints. The DID resolver is responsible for resolving a DID into its corresponding DID document, similar to a browser that resolves an HTTP link into an HTML document. A DID is always generated by a DID method. A DID also follows a generic DID schema: the schema always starts with the prefix "did", followed by a DID method identifier, followed by a DID method specific identifier. The Blobaa method schema, for example, looks like this: it contains the prefix "did", followed by the Blobaa DID method identifier, which is "bba", followed by a Blobaa DID method specific string. Let me show you what a DID document looks like before we take a closer look at DID methods. A DID document always contains a link to a schema that provides information on how to interpret the DID document fields, and the DID it is linked to. In most cases, it also contains public keys for specific contexts and service endpoints for further interactions.

So let's talk about DID methods. Every DID method follows its own specification, which describes how to implement a specific DID schema to function with a specific verifiable data registry. Each specification has to fulfill some basic requirements, including the following three key requirements. It must be adequately detailed to be implemented independently, so that a third party is able to implement a resolver based on this specification only. It must include a method specific DID schema. And it must also include the method specific CRUD operations, which describe how a DID can be created and read (resolved), how the DID document and the DID controller can be updated, and how the DID can be deactivated.

So let me finally present you the BBA DID method. Instead of showing you the whole specification, I would like to present some key characteristics. The Blobaa DID, or BBA DID, utilizes the Ardor blockchain as its verifiable data registry. It uses two transaction types of the Ignis child chain; Ardor is a multi-chain architecture in which child chains are secured by the Ardor parent chain. The first transaction type is called account property and is used to manage DIDs and DID controllers. An account property transaction lets you attach arbitrary data to a blockchain account in the form of a property key-value pair. The second transaction type, used to store DID document templates, is called data cloud. With this transaction type, it is possible to store bigger chunks of data within the blockchain. A DID document template is a DID document without a DID; the corresponding DID is filled into the DID document as part of the resolution process. A DID controller is always an Ardor account. DID controller and DID document management are split. This makes the BBA DID method capable of extending the DID document handling and of enabling additional storage options. It is, for example, possible to add IPFS support, so that the document template is stored on IPFS and the IPFS address is then linked to the DID. An Ardor account as a DID controller can also control multiple DIDs.
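Before moving on to the BBA DID schema: to make the earlier description of a DID document concrete, a minimal example might look like the following. This is an illustrative sketch with placeholder values, following the generic W3C DID document structure rather than the exact template the talk shows on its slides.

// Illustrative only: the DID, key material and endpoint are placeholders.
const didDocument = {
  "@context": "https://www.w3.org/ns/did/v1", // tells consumers how to interpret the fields
  id: "did:bba:t:0123...abcd",                // the DID this document is linked to
  authentication: [
    {
      id: "did:bba:t:0123...abcd#key-1",
      type: "EcdsaSecp256k1VerificationKey2019", // a public key for the authentication context
      controller: "did:bba:t:0123...abcd",
      publicKeyHex: "02b4...e1", // placeholder key material
    },
  ],
  service: [
    {
      id: "did:bba:t:0123...abcd#agent",
      type: "DIDCommMessaging", // a service endpoint for further interactions
      serviceEndpoint: "https://agent.example.com",
    },
  ],
};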
So how can this information be used to resolve the DID document? The basic workflow to resolve a bba DID is the following. The first part is to gather the account property transaction identified by the transaction hash included in the DID. This transaction provides the following information. The property name contains an internal DID identifier, which is used to track the DID within the Ardor blockchain. The property value contains encoded information about the DID following this schema. The first three characters indicate the bba DID method version. Every data field is separated by the pipe character. The next information indicates the state of the DID. It can be active, represented by an A, inactive, represented by an I, or deprecated, represented by a D. In case of deprecation, which indicates that the DID is no longer controlled by the Ardor account this account property belongs to, the redirect account data field holds the new DID controller account. Otherwise, it is filled with zeros as its default value. The storage type field indicates the storage method that was used to store the DID document template. C stands for Ardor's cloud storage. The last data field is the DID document template reference. It is filled with the pointer that can be used to retrieve the DID document template. In case of the Ardor data cloud method, it contains the transaction hash of the data cloud transaction in which the template is stored. After retrieving the template, the only missing step is to include the DID into the template and to return the DID document.
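The pipe-separated property value just described is easy to decode. Here is a sketch; the field order follows the description above, while the example value and the exact padding of the redirect field are assumptions for illustration.

    def parse_property_value(value):
        # e.g. "001|A|000...000|C|<data cloud tx hash>"
        version, state, redirect, storage, reference = value.split("|")
        states = {"A": "active", "I": "inactive", "D": "deprecated"}
        return {
            "method_version": version,     # the first three characters
            "state": states[state],
            # only meaningful for deprecated DIDs, otherwise all zeros:
            "redirect_account": redirect,
            "storage_type": storage,       # "C": Ardor's data cloud storage
            # for the data cloud method: hash of the data cloud transaction
            "template_reference": reference,
        }

A resolver would then fetch the referenced data cloud transaction, take the stored template, and fill the DID in to produce the final DID document.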
So, enough theory, let's see it in action. There is a web page available to easily play with the bba DID method operations, which is called Pubco, a UI for the bba CRUD operations. So, yeah, let's create a DID. We will create a DID on the Ardor testnet network, with a DID document key in the form of an elliptic curve key, which is used for authentication. And we will also create a dummy service with this kind of information here. So, let's create it. And that's it. We've now created a DID on the Ardor blockchain, and we can see the information here: the DID document we just created and the key material for the DID document key. You can also save the DID and download the information here. So, yeah, you can now see the transactions issued to create the DID. This is the official web UI from Ardor. And if we now, for example, copy the transaction hash and search for it, we'll see the DID information here. And as a second step, we can copy this transaction hash and find the data cloud transaction here and the DID document here. So, this works. Cool. We can also resolve this DID with the DID resolver here. So, let's copy the complete DID into this field. Now we get this information as well, in a more comfortable way. And the bba DID method is also resolvable by the Universal Resolver. So, you can probably see, yep, there it is. The Universal Resolver is a project from the Decentralized Identity Foundation and is there to provide one generic resolver for a lot of, in the best case all, possible DID methods. Yeah, that's it. Cool. I hope you liked the presentation and got a better understanding of what SSI is about and how the bba DID method works. If you're interested, check out the bba DID method repo, where you can find the complete specification and everything related to the bba DID method and SSI. And also have a look at our Chainstab website. Thanks for your attention. So, we have a chance to answer questions here together with our speaker. Our signal angel, Autanasa, is waiting for you at the moment to pass all your questions over to us. Attila, you were just talking about decentralized identifiers. So, what is the special thing about the identifier you use here compared to others? Yeah, I think the only special thing is that I've created it for the Ardor blockchain, which didn't have a decentralized identifier method before. So now Ardor is one more option for use as a verifiable data registry in the SSI ecosystem. Yeah. All right. So, we are currently still looking at the pad from the signal angel. At the moment, there are no questions which came in for now. So, if everything was so clear that no one has any question to ask, I can thank you at this moment very much for your talk. Thanks for the great time. And since I don't see anyone coming up here, I would say we close this talk for now. And I wish you lots of fun in the 2D world, to run around a bit. Maybe we find you in one or the other hallway. And great to see you, also at a physical event, if we can ever have one again. Yeah. Thanks for having me, and hope we'll see you again. Bye, Attila. Bye.
In this talk, we will present the bba DID method and how it fits into the Trust over IP Stack (SSI Stack).
10.5446/51948 (DOI)
Hello to this first talk at Chaos West today, on the third day of RC3. I'm happy to welcome Robert, also known as atdotde. He's a string theorist from Munich and it's his fourth talk at C3. He's going to talk about infinities, hopefully in a way that is accessible for non-physicists as well. And yeah, the talk is pre-recorded. It's going to last about 45 minutes, and we're happy for you to ask questions as usual on IRC, where the channel is #rc3-cwtv, or on Twitter or Mastodon with the same hashtag. Yes, so if you have any questions, ask them there, and I think we can start the talk. Hello, welcome to this presentation about the really big. I want to take you to infinity today and look at various aspects, and I will show you that there are different infinities. And I hope there is something for everybody, independent of what your previous knowledge of mathematics or physics or even philosophy is. So: infinity, confronting the really big. What are we going to talk about? The plan today is that we're going to meet a couple of people. The first will be the ancient Greeks, because they already thought about infinity. Then I will tell you what mathematicians know about how to deal with infinity. We will see how games come into the story. And then there will be a final chapter where I discuss physics and infinity, which is my home turf; I teach theoretical physics at university in Munich. So I guess everybody has already thought about infinity, and this is a really big number. Well, is it a number? Well, I think everybody has contemplated that. But the problem with this number is that it's hard to compute with it, because if you add one to infinity, it seems it's still infinite. So you have an equation that says infinity is the same as one plus infinity, and then you can cancel infinity on both sides. Then you get zero equals one, and every mathematician's head explodes. So you have to be a little bit more subtle than this. And today I want to show you how to be subtle about infinity. So as I said, already the Greeks considered infinity. A famous one is Aristotle, in his book Physics. He discussed an old riddle, and that is that of Achilles. Achilles was supposed to be probably the fastest runner in ancient Greece, and he was running a race with a turtle. Obviously, Achilles runs much faster than a turtle. Let's say he runs 10 times as fast as the turtle. So they give the turtle a head start. Let's say they give the turtle a 10 meter head start, and then the race starts. And of course Achilles runs really fast. But the question is, can he overtake the turtle? Of course, he runs the first 10 meters very quickly. But in the time that he runs 10 meters, the turtle has itself run a meter. So he hasn't caught up. So he runs the other meter; the turtle runs another 10 centimeters. Still Achilles has not overtaken the turtle. And it goes on and on and on. And so Aristotle asked the question: does he actually ever overtake the turtle? How can this be? There's an infinite number of steps, and you add something in every step. Every time you think Achilles catches up with the turtle, it has gone further. And he wondered, does he ever overtake the turtle? Of course, everybody knows from real life that Achilles will overtake the turtle, but it seems like you're adding an infinite number of terms, so this could be an infinite number. So we've encountered our first problem with a naive treatment of infinity.
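The modern resolution, which the talk builds up to below, is that Achilles' infinitely many catch-up steps sum to a finite distance. A few lines of Python, assuming the ten-to-one speed ratio from the example, make the geometric series visible:

    from fractions import Fraction

    distance = Fraction(0)
    step = Fraction(10)     # the first catch-up step: the 10 meter head start
    for n in range(10):
        distance += step
        step /= 10          # meanwhile the turtle covers a tenth of that
        print(n, float(distance))

    print("limit:", float(Fraction(100, 9)))   # 11.111... meters

The partial sums 10, 11, 11.1, 11.11 and so on approach 100/9 meters, the point where Achilles draws level, so the infinite sum is perfectly finite.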
Another question that Aristotle considered was the question of the continuum. Say you have a bar of chocolate and you subdivide the bar of chocolate into two parts. And then you further subdivide the two parts, and subdivide the parts, and you keep going. You could imagine doing this infinitely often, and then you end up with an infinite number of very, very tiny pieces of chocolate. And the question is, how can this be? When you've done it infinitely often, is it still chocolate? You end up with infinitely many parts. So either each part contains zero chocolate, then you have zero times infinity, and it seems like you don't have chocolate anymore. Or each tiny bit still has a finite size, and then you have infinity times a finite size, which seems like you have turned your chocolate into an infinite amount of chocolate. And that's absurd as well. And Aristotle wrote pages and pages about this question: how to resolve the problem of the continuum. But of course, the ancient Greeks also knew the answer, at least in the case of chocolate. They invented atoms. They said: at some point, when you try to further subdivide, you cannot do it without hurting the character of the chocolate. There are smallest chocolate particles, and let's call them atoms. And by this line of thought, they invented atoms without knowing anything about protons and electrons and so on. They just came up with atoms from this riddle about infinity. And they said the infinite divisibility of the chocolate is just an idealization. We will later find that this is still a pretty modern answer when it comes to physics. Mathematicians weren't really happy, because they said, well, it's an ad hoc solution, maybe we can come up with something better. But it took them several centuries to resolve this in a mathematically satisfactory way. And of course, this is the foundation of calculus, which was invented essentially by Newton and Leibniz, independently or concurrently. So what is the solution mathematicians have to this question? Here I show you how the mathematicians Bolzano and Weierstrass formalized this, and probably you've seen this if you have seen any university mathematics. You say a sequence, like here I take the sequence of Achilles' positions, 10, 11, 11.1, 11.11, and so on, converges to a limit, and you write: the limit of the sequence a_n, when n goes to infinity, is a, if you can have a dialogue like the following. Person A, let's call him green, claims the limit is 100 over nine. And a second person, let's say yellow, says: I don't believe you. And then green says: actually, what's the accuracy that you want to know the answer to? Let's call it epsilon. What is the accuracy that you're happy with? Then yellow can say any number, say one over a million. And then green says: okay, and I can prove to you that from the seventh element of my sequence on, all further elements of the sequence are closer to 100 over nine than the epsilon you chose. Of course, the seven is a reaction to the one over a million. But if you can show that you have such an answer for every possible choice of yellow's liking for this very small number, then you've shown that the sequence converges. Or, as mathematicians would write this: for every epsilon, there is a natural number N such that for all larger natural numbers, the sequence is closer to the supposed limit than your epsilon.
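This epsilon-N dialogue can be played out in code. Here is a sketch for the sequence from the race, computing, for a given epsilon, the index from which all elements stay within epsilon of 100/9; since this sequence increases towards the limit, checking one element suffices.

    from fractions import Fraction

    def a(n):
        # partial sums 10, 11, 11.1, 11.11, ... as exact fractions
        return sum(Fraction(10, 10**k) for k in range(n + 1))

    limit = Fraction(100, 9)

    def n_for(epsilon):
        n = 0
        while limit - a(n) >= epsilon:   # monotone approach from below
            n += 1
        return n

    print(n_for(Fraction(1, 10**6)))     # 7: from the seventh element on

Whatever epsilon yellow demands, n_for answers with a concrete N, which is exactly what the definition of convergence asks for.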
And the nice thing about phrasing the question of convergence and of limits of infinite sequences like this is that you never ever have to really hit the limit point. The true value of the sequence is not a member of the sequence. And you can even replace the limit with infinity. Then, instead of a small epsilon for the accuracy, you say: okay, I want the sequence to be bigger than 42 million. And then you say: okay, from sequence point so and so on, you're always bigger. So if for every bound, no matter how large it is, you can say that from a certain point on the sequence is bigger than that bound, you say it converges to infinity. Okay, so this is one notion of infinity. And another one comes from a very simple process, where you probably also first thought about infinity, and that's counting things. So let's count sheep: one, two. And the question is, what is counting? How do we do this? Can we formalize this in a way that infinity is a valid answer? So let's say we have three sheep: one, two, and three. Then counting them means we come up with a one-to-one correspondence between each sheep and one of the three numbers. And then we say we've counted the sheep. So we've counted them: there are three. Or you could say: maybe I don't want to count them with numbers, I want to make sure that two sets have the same size. So I can replace the numbers by fruits and then say I have as many sheep as I have fruits if there is a one-to-one mapping between fruits and sheep. And that concept goes under the name of cardinality. You say, by definition, two sets have the same number of elements, or the same cardinality, if the elements can be matched in a one-to-one way, like we've done with the sheep and the fruits. So for example, the set foo, bar and baz has the same number of elements as the set one to three. So you would say there are three elements in the set. But it has a different number of elements than the set that just contains our favorite numbers 23 and 42. So if you remove the strawberry, there are fewer fruits than sheep. This sounds terribly trivial. But the nice thing is, you can extend this requirement of having a one-to-one mapping to an infinite number of elements, or infinite sets. And the mathematician who first formalized this is Georg Cantor. Let's see how this works for infinity. So for example, take the natural numbers one, two, three, and so on, and call their cardinality aleph zero or aleph nought. Aleph is a Hebrew letter. Since we are talking about really big things, he thought it a good idea to access a new alphabet, in this case Hebrew. So aleph nought is the cardinality of the natural numbers. And the maybe surprising thing about the natural numbers is that you can have a proper subset having the same cardinality. Say, for example, from the natural numbers you remove the one; you only have the set of 2, 3, 4 and so on. Then the set has the same size, because you can map one to two, two to three, three to four and so on. And that is the idea of Hilbert's hotel. It's an infinite hotel that's fully booked. But then another nerd arrives, and Hilbert says: oh, that's easy, everybody move up one room. And then the nerd can go into the first room and sleep for the night. Similarly, even the even numbers, two, four, six and so on, have the same cardinality, because you can map them in a one-to-one way. Oh, there's a typo in my slides: I meant, map one to two, and I should map two to four, not to three, and three to six, and so on. You see, there's a one-to-one mapping between the natural numbers and the even numbers.
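Both Hilbert's trick and the map to the even numbers are one-liners. The sketch below pairs every natural number off with a room, respectively with an even number, which is all that "same cardinality" asks for:

    # Hilbert's hotel: the guest in room n moves to room n + 1,
    # so room 1 becomes free although the hotel was fully booked.
    move = lambda n: n + 1

    # one-to-one map between the naturals and the even numbers
    to_even = lambda n: 2 * n

    for n in range(1, 6):
        print(n, "->", move(n), "and", n, "->", to_even(n))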
And therefore, there are as many even numbers as natural numbers, even though in some sense you would probably say there are only half as many even numbers; but not in this formalization as cardinality. And because so many sets have the cardinality of the natural numbers, there's a name for those: they are called countable. And you can also see that a set is finite exactly if it does not have the same cardinality as any of its proper subsets. When we removed the strawberry, there were fewer fruits than sheep. But when we removed the one from the natural numbers, the result had the same cardinality, and that shows that the natural numbers have an infinite cardinality. And even other sets that look bigger have the same cardinality. For example, the set of pairs, one-one, one-two, one-three, one-four, and so on, and two-one, two-two, and so on, has the same cardinality as the natural numbers, because you can lay a path through this table that goes diagonally. And if you number each pair by the step in the path at which it is reached, you see that there's a one-to-one mapping: every pair is reached exactly once. And so there are as many pairs as there are natural numbers. And from pairs, by just writing the pairs as fractions, so instead of writing one comma two you write one half, you can see that there are as many rational numbers, so fractions, as natural numbers. So the rational numbers are still countable. And you can easily generalize this: once you have the pairs, you can generalize this to any tuples, so also the hundred-tuples are still countable. And from that follows, for example, that the set of all computer programs of finite length is countable, because they are finite strings over a finite alphabet, so you can enumerate them. So if you're into constructive mathematics, where things only exist if you can give a recipe for how to construct them, then your universe of constructable things is also countable, because the recipes are a countable set: they are made up of finite strings. Okay, now you might have the idea: okay, infinity is the number of elements of the natural numbers, since every infinite set I've shown you so far has the cardinality of the natural numbers. But now I want to show you, and this is an argument due to Cantor, that the real numbers are actually more than the natural numbers: you cannot map them in a one-to-one way to the natural numbers. So how does this work? Assume you've come up with a sequence of all real numbers, so you can write them one above the other. So maybe it starts with: the first number is 0.314159265 and so on. For the sake of argument, I've written all these numbers such that they're between zero and one; once you've enumerated all real numbers between zero and one, you can extend this trivially to all the real numbers. Say the next number is 0.42, the next number is 0.23232323, the next number is 0.123456798, and so on and so on. And this list goes on. It doesn't stop after six elements, it goes on forever. And eventually, every real number is in this list. So let's make this assumption. Then I claim, or actually Cantor claims, we've missed at least one number: we can construct a real number that is not in the list, and it goes like this. The claim is that the number 0.43354542 and so on is still missing. So how did I come up with this number? I didn't give you the list of all the real numbers, only the first six, so how can I know it's missing? Well, this is how I constructed it. In the first place after the decimal point, I look at the first number; there's a three, so in my number that is missing, I put there a four. At the second place, I look at the second number; there was a two, so I replace it and write down a three. Then for another two, I write down another three; for a four, I write down a five; for a three, I write down a four; and so on. By doing this, I make sure that the number that I'm writing down on the bottom differs in at least one decimal place from every number in the list. And therefore, this number is unequal to any number in the list.
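Cantor's construction is mechanical enough to code up. Given any list of digit expansions (finite here, standing in for the infinite list, and ignoring the subtlety that some reals have two decimal expansions), it produces a number that differs from the n-th entry in the n-th place:

    def missing_number(digit_lists):
        # digit_lists[n][n] is the n-th digit of the n-th number
        digits = []
        for n, number in enumerate(digit_lists):
            d = number[n]
            digits.append((d + 1) % 10)   # any digit other than d would do
        return "0." + "".join(str(d) for d in digits)

    the_list = [
        [3, 1, 4, 1, 5, 9],
        [4, 2, 0, 0, 0, 0],
        [2, 3, 2, 3, 2, 3],
        [1, 2, 3, 4, 5, 6],
    ]
    print(missing_number(the_list))   # 0.4335, differing from every row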
And therefore the cardinality of the real numbers, let's denote it aleph one (actually, that's an assumption), is bigger, because if we try to match them in a one-to-one way with the natural numbers, we are missing numbers. I should mention that of course this is not true in constructive mathematics because, as I said, constructions are countable. So the constructive people miss a lot of the real numbers. And there's also a famous hypothesis called the continuum hypothesis, which says that the two infinities of the natural numbers and the real numbers are actually next to each other: there cannot be a set which is bigger than the natural numbers but smaller than the real numbers. And it's a hypothesis and not a theorem, because it can be proven that it cannot be proven or disproven. You can add either the statement or its negation to your standard set theory without producing a contradiction. So the continuum hypothesis is in fact an independent axiom. And if you know a little bit more about mathematics, you can see that what we've proven here with this argument is actually that the cardinality of the power set of a set, where the power set is the set of all subsets of the set, is bigger than the cardinality of the set. And the nice thing about these cardinalities is that you can compute with them, you can calculate with them. For example, you can take the sum of two cardinals, which is just the cardinality of the disjoint union of the two sets, or you can multiply them by taking the cardinality of all pairs of one element from one set and one element from the other set. And you can even take powers: the cardinality of a set M to the power of the cardinality of N is simply the cardinality of the set of all functions from N to M. And you can see that in this way the power set of M is 2 to the M, because for each subset you map the elements of the set that are in the subset to zero and the ones that are not in the subset to one. But you cannot go backwards with cardinal numbers: you cannot subtract or divide in general.
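For finite sets the three operations are easy to spell out, which at least pins down the definitions. A sketch:

    from itertools import product

    M = {"foo", "bar", "baz"}
    N = {23, 42}

    # sum: cardinality of the disjoint union (tag elements to keep them apart)
    disjoint_union = {(0, m) for m in M} | {(1, n) for n in N}
    print(len(disjoint_union))          # 3 + 2 = 5

    # product: cardinality of the set of pairs
    print(len(set(product(M, N))))      # 3 * 2 = 6

    # power: |M| ** |N| is the number of functions from N to M
    functions = list(product(M, repeat=len(N)))   # one image per element of N
    print(len(functions))               # 3 ** 2 = 9

The same definitions, applied to infinite sets, give cardinal arithmetic; only the inverse operations fail.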
So this was the concept of cardinality, and we've seen already two types of infinities, aleph nought and aleph one, which we got when we extended the process of counting to infinite sets. So we've generalized one, two, three and so on to infinite sets. There's an alternative approach, which gives you the ordinal numbers: rather than generalizing counting, you generalize positions in sorted lists, so first, second, third and so on. And this leads to the ordinal numbers. So how does it work? For this we need our sets to be what's called well ordered. It means there is a relation between two elements, which has to obey three rules. Rule one is that two elements are either equal, or one is less than the other, or the other way around. Also, it's transitive: if x is less than y and y is less than z, then x should also be less than z. And we also require that each non-empty subset of M has a least element, so a smallest element: one element such that no other element of that subset is smaller. For example, the natural numbers with the usual ordering have this property. And from the last property it follows, for example, that there is no infinite decreasing sequence. And then you say that two ordered sets have the same order type if there is a one-to-one mapping that preserves this ordering. So let's see what kind of ordinal numbers we have. Well, to do this, you can see that the set of all ordinals up to some point can be viewed as the next ordinal. So the set of 0, 1, 2, 3, and so on up to 41 can be viewed as the number 42. And if you add that to the set, then this set represents 43, and so on. So let's get started. So 1, 2, 3, 4 are of course ordinal numbers. And the set of all the natural numbers, as an ordinal number, is usually called omega. But then you can form the set that contains all natural numbers and omega, and that gives you omega plus 1. And once you have omega plus 1, you can also have omega plus 2, omega plus 3, and so on. And then you throw in all of these, so omega plus any natural number, and make a set of all those ordinals; that ordinal is usually denoted omega times 2. And then you can still add 1 and 2 and so on, which gives you omega times 2 plus an ordinal, and so on; it gives you omega times 3, omega times 4, and so on. And of course, you can keep repeating this. And once you've put in all the ordinal numbers of that type, you get a new ordinal, which is omega squared. And you can add 1 and so on and so on, and then come to omega cubed, and so on. And you keep continuing that, and you get even bigger infinities: you get omega to the omega. And you keep repeating that: you get omega to the omega to the omega, and so on and so on and so on. Then you come to a number that's called epsilon nought, which is kind of the limit of omega to the omega to the omega to the omega; this is the set where you put in all the numbers that you can reach in this way. And, as we will see, this is the smallest ordinal number alpha such that omega to the alpha is alpha. Right, so we've generated ordinals, and we've seen that every ordinal has a successor: if an ordinal represents a set, then you simply add that ordinal to the set and you get the successor, the set of all ordinals up to and including that ordinal. But not every ordinal is a successor of something. For example omega, the order type of the natural numbers: there is no predecessor such that when you add one, you get omega. So those ordinals which are not successors are called limits. So for example, omega times gamma is the gamma-th limit ordinal, the way we've defined it on the previous slide. And omega to the gamma is the gamma-th ordinal that is not the sum of two smaller ordinals. So for example, omega plus 42 was the sum of two ordinals, and omega plus a natural number is always the sum of two ordinals. And the first ordinal that is not of this type is omega squared; it's the second ordinal that's not the sum of two smaller ordinals.
And in the same way, epsilon subscript gamma is the gamma-th ordinal alpha, as I said, for which omega to the alpha is alpha. Okay, we can add ordinals: from two well ordered sets, say M and N, you can form the disjoint union, and you can order the union by simply putting them next to each other, considering every element of M as smaller than any element of N. This makes the disjoint union a totally ordered set, and the order type of this set is M plus N. Note, however, that omega plus omega is not omega, because omega plus omega has two limit points. Omega plus omega would be: you put two copies of the natural numbers next to each other, where all the elements of the second set are considered bigger than the ones of the first set. Then the first copy of the natural numbers has a limit point, and then comes the second copy, which also has a limit point. So there are two limit points, whereas omega, as just the order type of the natural numbers, only has a single limit point. So these two ordinals are different. Similarly, consider three plus omega, which is the set 1, 2, 3, followed by a copy of all the natural numbers (where those 1, 2, 3 are different elements from the copy). Three plus omega has no limit element, because every element is either one of the 1, 2, 3 or a natural number with finitely many predecessors. Whereas omega plus 3 first has all the natural numbers, then comes omega, then omega plus 1, plus 2, plus 3. So these order types are different. So you see that addition of ordinals, defined as I've done it, is not commutative. The order matters, but it's still associative. And you can show that if the order type of M plus the order type of N is the same as M plus K, then this implies that N equals K. So you can cancel the M on the left, but it doesn't work on the right. And there's also a notion of multiplication, where you again take pairs and use lexicographic order, but let's not go into this.
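Before moving on, this non-commutative addition can be made concrete for ordinals below omega to the omega. The sketch below represents an ordinal in Cantor normal form as a list of (exponent, coefficient) pairs with the highest exponent first; the representation and helper names are my own, not standard notation.

    def add(a, b):
        # When adding b, all terms of a below b's leading exponent
        # are absorbed by b; equal leading exponents add coefficients.
        if not b:
            return a
        lead = b[0][0]
        kept = [(e, c) for (e, c) in a if e > lead]
        same = [(e, c) for (e, c) in a if e == lead]
        if same:
            return kept + [(lead, same[0][1] + b[0][1])] + b[1:]
        return kept + b

    three = [(0, 3)]     # the finite ordinal 3
    omega = [(1, 1)]     # omega, i.e. omega to the first power

    print(add(three, omega))   # [(1, 1)]: 3 + omega = omega
    print(add(omega, three))   # [(1, 1), (0, 3)]: omega + 3, a bigger ordinal

Running it shows exactly the asymmetry from the text: the three in front of omega is swallowed, the three behind it survives.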
Rather, let's consider games. This connection was discovered by John Horton Conway, who unfortunately died in 2020 following an infection with COVID. You probably know John Horton Conway because he is the inventor of many things, among those the Game of Life. But he was a famous mathematician who invented many, many things, and in particular, he found the connection between numbers and games. So how does it work? We consider deterministic two-player games of finite duration. These are games without randomness, so no dice, no card drawing. Deterministic also means that every player knows all the possible future moves at any point; there is no secret component like having cards in your hand. We're thinking of games like Go or chess, for example. And we're going to set up our games with the convention that a player who cannot move anymore has lost, and the other player has won. Usually these players have names: they're called left and right, or L and R. And then a game, which is a synonym for situation here, is simply given by all the situations, or games, that L or R can move to. So a game X is a pair of sets of games, X left and X right, where X left are all the situations that left can move to, and X right are all the situations that right can move to. That's the definition of a game. It seems like I've defined a game in terms of sets of games, so it seems circular, but I haven't. That is the trick, because there is already a simplest game. The simplest game is the one where no player can make any move, where the sets are empty. Of course, an empty set is a set of games. And let's call this game zero: the game where there are no moves at all. Now we have one game, and we can say, for example, there's a game in which player left can move to the game zero, and player right has no option. That's the first new game here, and I call it one. Or there's the game where player right can move to the game that has no further moves, and let's call that minus one. And once I have these games, I can also form the game, for example, where player left can move to the game one, and player right has no choice. Or I can form the game where each player can move to the game with no further options, and that game is called star. So it turns out not all games are numbers; we have to look at who's winning. And winning means we always assume that players play with the optimal strategy. There are two situations: either L starts or R starts. If L starts and L can force a win, then we call this game either fuzzy to zero or greater than zero, depending on what happens when R starts. If R starts and R wins, it's fuzzy to zero; fuzzy to zero means the player who starts wins the game. Whereas if L always wins, no matter who starts, it's called greater than zero. And if right wins when L starts and also wins when right starts, so right always wins, the game is called less than zero. And the game is called equal to zero if the second player always wins. Maybe we can illustrate this with a very simple game called Hackenbush; that's a very simple illustrative game. A game here consists of these trees that are connected to the ground, and they come in green and red edges. Each player, when it's their turn, can cut edges of their color. Green is the color of left, and red is the color of right. So for example, green can cut this edge up here, and then everything that is no longer connected to the ground disappears from the game. So now left has moved and removed this bit of green; then right can move and removes this edge; then green moves and removes this last bit here; then right moves by removing another bit here; and then left removes this. And now red cannot move anymore, because there are no red pieces left, and therefore green has won. Remember, left started, and this puts it in one of the four categories. Our games that we've encountered so far also appear here. The game that just consists of one green edge is the game where green can remove this edge and turn it into the game where there are no further moves, and right cannot do anything. Similarly, in this game here, left can remove the top edge and turn it into the game one, and right cannot do anything. So this game is the game two. And you realize that left could also cut the bottom edge, which would turn it immediately into zero, but that would not be a good move for left. So left would always cut at the top, and this is in fact still the game two. Also, for obvious reasons, by exchanging the colors, this gives you the game minus one. And then we can also make this game, where left can turn it into zero, and right, by removing this edge, turns it into the game one. So the question is, what is this game? I've put a question mark here, so let's figure this out.
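These definitions translate almost verbatim into code, and it will help for the computation that follows. The sketch below represents a game as a pair of tuples of games and classifies it into the four outcome classes by brute force under optimal play:

    def first_player_wins(game, player):
        # game = (left_options, right_options); player 0 is left, 1 is right.
        # The player to move loses if they have no move; otherwise they win
        # if some move puts the opponent into a lost position.
        options = game[player]
        return any(not first_player_wins(g, 1 - player) for g in options)

    def outcome(game):
        l = first_player_wins(game, 0)   # does left win moving first?
        r = first_player_wins(game, 1)   # does right win moving first?
        if l and r:
            return "fuzzy to zero"       # whoever starts wins
        if l:
            return "greater than zero"   # left always wins
        if r:
            return "less than zero"      # right always wins
        return "equal to zero"           # the second player wins

    zero = ((), ())
    one = ((zero,), ())
    minus_one = ((), (zero,))
    star = ((zero,), (zero,))

    print(outcome(one), "/", outcome(minus_one), "/", outcome(star))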
So to do this, I have to show you how to compute with games. We simply define the addition of two games, x plus y, by saying we put the two games next to each other, and then, when it's their turn, each player can decide in which game to make a move, then do the move in that game, and present this pair of remaining games to the other player. So if x is (x left, x right) and y is (y left, y right), then x plus y is the game where left can move either in the x game, to one of the x-left options while leaving the y game intact, or leave the x game intact and make one of his moves in the y game; and similarly on the right-hand side. This addition is actually inspired by Go rules, where you have independent situations in different parts of the board, and you can decide in which of the independent situations you move. And you can also negate a game by simply flipping the two sides. So minus x is the game where left can move to all the flipped games that right could move to before, and similarly on the other side. So now, still the question: what is the value of this game here? And I will show you the following thing. Here is again the table. Let's add another copy of this game, and I also want to add a copy of minus one; remember, this game was minus one. So let's investigate where we are. Let's first see what happens when left moves. Left starts, and it will cut one of these upper green edges, it doesn't matter which, so remove this. Then it's right's turn. Cutting this upper bit is better for right, because if he doesn't do it, his option will be removed by left's move. So he cuts there, then left moves here, then right moves here, and left cannot move. So right wins. So we are here: L starts, right wins. So we are in this part of the table. Let's set it up again and see what happens if right starts. Right, as I explained, it's always better to cut one of the upper options, so right starts and removes this. Now it's left's move; for left it's better to cut here, because it eliminates this option for right. So red moves and green moves, and now it's right's turn again, and there's nothing left to move. So left wins, and therefore we are in this corner of the table. So this game is zero. We found that this combination, the game that we're looking for plus itself plus minus one, gives zero. And therefore it makes sense to assign this game the value of one half, because twice itself minus one is zero. Setting up games in this way gives us what are called the surreal numbers. And you see immediately that all ordinals are also surreal numbers. Remember, ordinals were the ones where we could have omega to the omega to the omega and so on; we simply put the smaller ordinals in as left's moves and give no moves to the right player. Then, by the same construction with which we constructed the ordinals, we can construct games here. And there are lots of them. For example, the game omega is the one where left has the choice to move to any of the natural numbers, and right has no choice. In Hackenbush, that would look like this infinite tree. Note that this is not an infinite game, because as soon as left moves, for example by cutting here, it will turn into a finite set of nodes. Therefore, no matter where you start, the game is finite. The length is not bounded, but every game is finished after a finite number of steps. And so we find the ordinals.
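Returning to the half computation: extending the earlier sketch with addition and negation lets you replay it mechanically. Reusing the outcome function from the previous code block, the Hackenbush position above, as the game {0 | 1}, added to itself and to minus one, comes out as a second-player win:

    def add(x, y):
        # each player may move in either component, leaving the other intact
        xl, xr = x
        yl, yr = y
        left = tuple(add(g, y) for g in xl) + tuple(add(x, g) for g in yl)
        right = tuple(add(g, y) for g in xr) + tuple(add(x, g) for g in yr)
        return (left, right)

    def neg(x):
        xl, xr = x
        return (tuple(neg(g) for g in xr), tuple(neg(g) for g in xl))

    zero = ((), ())
    one = ((zero,), ())
    half = ((zero,), (one,))   # left can move to 0, right can move to 1

    print(outcome(add(add(half, half), neg(one))))   # "equal to zero"

So half plus half minus one is zero under optimal play, which is the justification for calling that game one half.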
But the surreal numbers that come from the games also contain the reals. I've shown you one half; in a similar way you can generate, in fact, all the rational numbers. And then putting all the rational numbers that are less than pi in as left's moves, and putting all the rational numbers that are greater than pi in as right's moves, gives you a game, and it will turn out that the value of this game is actually pi. You can also multiply and divide games; I won't show you the formulas, I've only shown you the formula for addition, but multiplication also exists. And the surreal numbers contain even more numbers. For example, because you can divide, you can divide one by omega; that's usually called epsilon. And this is the game where left can move to the game zero, and right can move to all the positive rational numbers. This is a number that is greater than zero but smaller than any positive real number. So this is a third way that infinities can be handled by mathematicians. And handled means you discover that there's not a single number infinity, but actually a large, large collection of different infinite values, and I showed you how to compute with them. But infinity also appears in the real world, say in physics, and there are different ways in which infinities appear. Maybe you've heard that in quantum field theory, when you compute scattering between elementary particles as set up by Feynman and his Feynman diagrams, you find that if you do the computation that is encoded in this picture, you get infinite numbers. But it turns out this infinity is not real, because here you're computing with things that are not observable. So how does this infinity come about? Let's say here's a proton, a positively charged particle, that flies next to another positively charged particle. Then there are quantum fluctuations, denoted here by these little things, where pairs of particles and antiparticles pop up from the vacuum. And since they have opposite charges, they're attracted in different directions, so they shield the electric field of this particle here. So when this particle flies by, it doesn't see the charge only of this particle, but it sees the charge of this whole thing, and this is slightly less charged than the core, because these virtual pairs shield the charge of the core. And therefore this proton here perceives a smaller charge than is actually there. And when you set up your calculation in terms of the charge of the particle in the center, then you find that that will be infinite. But since this charge alone is never observable, because this charge is always surrounded by this cloud of virtual particles and antiparticles, the charge of the bare particle doesn't really exist. And if you set up your calculation in terms of the stuff that you can actually observe, say the charge of this whole cloud here, then you find that you never encounter infinity. Well, this is a prose way of saying there's a way to compute this Feynman diagram such that you never encounter infinities, because in the end, in the physically observable things like the scattering amplitudes or the decay rates, you never see infinite numbers. Then you've seen singularities in general relativity, like the singularity inside a black hole. This year's Nobel Prize goes to three physicists, one of them Roger Penrose. He investigated the nature of these singularities.
So here you see him next to a black hole, or actually an artist's impression of a black hole. Our understanding of this infinity inside a black hole is that the theory of general relativity is actually not good enough very close to this very dense mass, close to the singularity. You have to come up with a theory of quantum gravity, like string theory or some other quantum gravity, and then this actual knowledge of the fundamental theory will replace the infinity. The fact that you've computed infinity just shows you that your theory is not good enough; you have to come up with a more general theory, like superstrings. This is a bit like the Greek example with the chocolate bar, where you say: when you make your chocolate pieces small enough, the idea that everything is chocolate is no longer true, you hit the atoms. So here you would hit the strings. And there's even a third way of encountering infinity. For example, you can have infinitely big systems. And you can show that things like phase transitions, here between liquid water and ice, only exist in infinite systems. If you have a finite system, say this ice block that weighs many, many tons but is finite, then strictly speaking, or mathematically speaking, there is no phase transition; there's a continuous transition between liquid water and a big ice block. But pretending that the system is actually infinite and that there are two phases, like ice and liquid water, is actually a good approximation. So pretending a large system is actually infinite is an approximation that simplifies your life and shows you some effects better than the actual thing. So infinity can be a good approximation: that is the upshot of this slide. It's one of the idealizations that physicists like to make in order to be able to describe things. Here you see the approximation of a spherical cow that you have probably seen in the cartoon before. So you can say such idealizations are always stupid, like the spherical cow, and the real thing is not the approximation. But I would argue that you could, in exactly the same manner, argue that real numbers are not relevant for physics because they're an idealization, because you can never measure a real number. Every measurement has a finite accuracy, and within the error bounds there's always a rational number. So you can never be sure that some measurement comes out as a real number; you could never distinguish it from a rational number. So you could say real numbers are a great idea of mathematicians, but an idealization that has no expression in the real world, like spherical cows or infinite icebergs. But of course, every day we compute as if measurements were real numbers. So I would argue it is just as good to consider infinite systems as it is to consider measurements where the value is pi. Okay, so that was my brief run through various notions of infinity. I will be here for your questions, because I'm pre-recording this. And if you come up with questions later, feel free to contact me; my email address is here, and also my Twitter handle. We've seen that infinity can be tamed, and we've seen that there are vastly different amounts of infinity. And I hope I convinced you that infinity is in fact everywhere. Thank you very much. Thank you so much for that great talk. It reminded me of my first year in university.
Well, just in a less tedious way, I think, the way I probably would have wanted it to be back then. We do have a number of questions; I'll just put them to you. If you have any questions, feel free to ask them too; maybe the talk was quite technical in some areas, but if you have something a bit more general that's related to the area, I think that's good as well. We don't have too many questions at this time, so just feel free to put them in IRC. So the first question is: is infinity, or can infinity be, a good approximation? Or is the statement that infinity can be a good approximation of big things related to the law of large numbers? Yes, of course. The law of large numbers also kind of tells you that the bigger the numbers, the simpler things get, right? For those who don't know what this is: the law of large numbers tells you, in a nutshell, that if you have a probability distribution, or many of those, and you add them together, no matter what you do, you end up with a Gaussian in the limit of adding infinitely many random variables. So yes, that is an example. If I add 1000 of those, they look pretty much like a Gaussian, but in detail they will differ in general. So if you want to understand the difference between the Gaussian and the real thing, that can be complicated. But as soon as you go to the infinite limit, it's just a Gaussian. Well, fine print applies. But in general, if it makes sense to sum them up, then they will end up as a Gaussian under very general circumstances. So yes, the infinite limit is simpler than just 1000 or a million. Great. So someone just commented, when I said it's less tedious, that it's always less tedious if you don't have to pass an exam about it. There's probably some truth in that. We had some questions about the, maybe we can go back to the slides. Is that possible? Okay, let's try. Well, I'll ask the question anyway. So we had a few questions about the games. And the question is: can't there be multiple distinct situations where no moves are possible? Like, you can have a lot of different checkmate positions in chess, where you can't legally move anywhere anymore. Why do we have so many? On the one hand, if you look at the game, you can see that there are a lot of different ways in which you can have no moves. And on the other hand, you say: well, I just represent that by this number zero. Can you see my screen? I started the presentation again. Yes, I don't know what's live right now. It's live. Yes. Okay. Great. So here, it's of course an abstraction, right? When I talk about games as numbers, then I don't care whether you play on a chess board with wooden pieces or you draw trees on a piece of paper. Of course, the games look very different. But the essence of the game is just: what are the possible moves? And therefore, if there are no moves, there are no moves. I mean, all the games with no moves are the same game. You can come to it, of course, in different ways, and you can realize this as a chessboard or as this Hackenbush game that I showed you. But from an abstract point of view, that doesn't make a difference. However, for this whole machinery of identifying games with numbers, it's always important that you list both the moves of the left and of the right player. That's not exactly what you usually do in chess, because in chess, it's either white that moves or black that moves.
So you never put two chessboards next to each other and then decide on which board you move. So when you say the player whose move it is, say it's left, cannot move, then the games can still be different in terms of: if it were right's move, what could right do? So in that sense, you can have different games where left cannot move, and I've shown you those. For example, all the ordinals arise as games where right has no moves at all and left moves to a natural number, or to an ordinal number, actually. So you have to consider both options for both players. But as soon as the possibilities are structurally the same, then you should consider the games as equivalent, even though they can look very different. I've shown you this Hackenbush game, which is good for formalizing very, very simple games with a few moves and also for formalizing many of the numbers. But of course, you can have very different looking games where the options are the same. So from an abstract point of view, the game I've shown you here is the game number two, and you can realize two in many different games, like Hackenbush or Tic-Tac-Toe or whatever. Yeah, so I think there are these abstraction layers, maybe one above the other, and you're looking at it from different perspectives. Yes, cool. Then we had another question; maybe you remember which slide that was. The question is about the epsilon. I call two different things epsilon. Okay. So one thing I call epsilon with a subscript one. That was in the chapter on ordinal numbers. I explained that you can take omega to the omega; omega, remember, is the ordinal that represents the natural numbers. And you can take omega to the omega, and then omega to the omega to the omega, and then omega to the omega to the omega to the omega, and you can keep going. And in the ordinals, there's a limit to this procedure, and that is called epsilon one. And the formal description, the formal definition, is this: if you have this infinite sequence of taking omega to the omega to the omega and so on, then that number doesn't change if I take omega to the power of that number, because if I add another omega at the bottom, I get the same number. So it's kind of a fixed point of the operation of taking omega to the power of a number. If I call that number alpha, then omega to the alpha is alpha. And you can define epsilon one as the smallest ordinal alpha such that omega to the alpha is alpha. And you can say epsilon two is the second ordinal that has this property. And actually, in the subscript, you can put any ordinal. That was one epsilon. And in this whole surreal number story, with the games and the numbers, there's also a number called epsilon, without a subscript. And that is the number that I get when I take one divided by omega. So omega is the order type of the natural numbers. And the surreal numbers are almost a field, a Körper auf Deutsch, so you can add and multiply like with the real numbers. The only caveat, when I say it's almost a field, is that the surreal numbers are actually so many that they don't form a set; in usual set theory, the collection is too big for a set, it's only a class.
But apart from that, it's a field and you can calculate. And in particular, you can take one divided by omega. That is a number, and you can show that this number is positive. But you can also show that if I take any positive rational or real number, then one over omega is smaller than that number. So in some sense, it's infinitesimal. It's like the dx when you take the derivative, or the dx in the integral. So it's a number like that. And yes, it's very different: it's a very, very, very tiny positive number, whereas the other epsilon, epsilon one, was a really, really big ordinal number. So that reminds me, there was a way of sort of adding some infinitesimal numbers and doing analysis with that. Do you know what I mean? Yes, there's this non-standard analysis. Yes, that's right. And this is exactly, you can do this with the surreal numbers. So yes, that is the same thing. Cool. Yeah. Well, when I say it's the same thing, what I actually mean is that it's contained in the surreal numbers. It's contained in the surreal numbers, so the surreal numbers can do more than non-standard analysis. Yeah, okay. I always like how there are these connections that pop up all over the place if you start pulling things together. So I'm more of an experimental physicist, and then the theoretical physicists come and they say: oh, that's obvious, it's all the same, it's coming from the same place. Yeah. Maybe I should say, because this whole numbers and games thing is kind of a niche thing: if people want to learn more about this, I only gave you a glimpse, but there are actually great books about this. The definitive guide is a book by John Conway himself; it's called On Numbers and Games. If you Google for a PDF, you'll find a PDF of that. There's another book that actually coined the term surreal numbers, and that's written by Donald Knuth, the guy who wrote the TeX system and the book series on theoretical computer science. He has written a book on surreal numbers, and right now I forgot the title, but that's an even shallower introduction to the surreal numbers, and that's also highly recommended. Great. I think that's a great end to the talk. I always love book recommendations. So it's been a pleasure to listen to your talk and have you answer the questions. That's also what I see people wrote: thanks for the talk. Yeah. And I hope you come back for your fifth time at some point, maybe in real life again, on stage. Yes. Thank you. That's so much better than doing it in the canned version. Yes. I think we've got the next talk at, it's 1 p.m. now, at 1.30, and it's going to be called, let me just see, can someone tell me in the ear? Electromobility, and why it can't work as you had imagined. I hope you tune back in, and thank you for listening. Bye. Yeah. Thanks for coming.
Infinitely large quantities can arise in mathematics, physics, game theory and philosophy. Those need to be handled with care if one wants to avoid paradoxes. This is a beginner's introduction to taming infinities. This will be a guided tour to a variety of occurrences of infinite quantities when counting, sorting, playing games, philosophising about arrows and turtles and when computing scattering of elementary particles. In each situation, I will explain some tools to control these infinities and how to get sensible answers to sensible questions. We will see really big sets, understand how to add 1 to infinity and how to take infinity to the infinite power, play games that are worth infinite amounts, how to add 1+2+3+ and so on to obtain -1/12 and why it is often easier to pretend something is infinite when it actually is not. Along the way, we will learn from Aristotle, Cantor, Hilbert, Feynman and Conway what they contributed to this infinitely interesting subject. The audience should be willing to bend their mind but will not need anything but a bit of high-school mathematics.
10.5446/51950 (DOI)
Hello everyone and welcome back to our Chaos West stream. Our next speaker is Hendrik Heuer. Hendrik Heuer is a researcher at the University of Bremen and the Institute for Information Management Bremen, and in his talk Rage Against the Machine Learning he will explain why audits are a useful method to ensure that machine learning systems operate in the interest of the public. Welcome, Hendrik. I'm Dr. Hendrik Heuer and welcome to Rage Against the Machine Learning: Auditing YouTube and Others. In this talk I will explain why audits are a useful method to ensure that machine learning systems operate in the interest of the public. My goal is to empower civic hackers, and I'm going to do that by releasing the scripts that I used to audit YouTube; I'm also going to explain to you how to use these scripts. Why would it be interesting to audit YouTube or other machine learning based curation systems? Well, YouTube has more than 2 billion users per month, and 70% of the videos watched on YouTube are recommended by a machine learning based curation system. This is remarkable because every fourth person worldwide relies on YouTube as a news source. That percentage is even higher for younger people: every third 18 to 24 year old consumes his or her news on YouTube. This means that YouTube's machine learning based system plays an important role in what billions of people watch and how they see the world. Why do we need machine learning in these systems? Well, there are 82.2 years of video uploaded to YouTube every day; that's 500 hours of video uploaded per minute. For a team of human experts it would be impossible to review and categorize this user-generated content. Now, YouTube markets its recommender system as a sophisticated algorithm to match each viewer to the videos they are most likely to watch and enjoy. In this talk I will show that popular unrelated content is king. So who am I? My name is Dr. Hendrik Heuer and I'm a researcher at the University of Bremen and the Institute for Information Management Bremen. This talk is based on my doctoral thesis, which focused on auditing machine learning. In this talk I will explore audits as a way of making sense of complex and proprietary machine learning systems used by YouTube as well as others. This is based on a research project I conducted together with Andreas Breiter and Janis Theocharis. And this talk is based on my doctoral thesis, called Users and Machine Learning-Based Curation Systems; using the link, you can download the thesis for free at the library of the University of Bremen. As many of you may know, machine learning based curation systems are a special type of artificial intelligence. The definition of AI according to Hansen is that it's an umbrella term for computer systems that are, quote, able to perform tasks normally requiring human intelligence. I prefer the term machine learning because it's a bit more precise, and also because many of the successes in AI that we've seen in recent years have been obtained through what is called statistical machine learning. You might even have heard about the term deep learning, which is an even smaller subset. So what am I referring to when I say machine learning? It's a certain kind of artificial intelligence that infers decisions from data. Similarly, Mitchell defined machine learning as follows: a computer program is said to learn from experience E with respect to some class of tasks T and performance measure P,
if its performance at tasks in T, as measured by P, improves with experience E. And machine learning enabled many of the recent advances in artificial intelligence. It is used to recognize handwritten digits, to recognize people and objects in images, to translate from one language to another, to drive cars and, that's the focus here, to recommend postings, photos and videos on platforms like Facebook and YouTube. And in my research I focus on Facebook and YouTube because they are two of the most visited websites worldwide. And they use ML based systems to curate the content for billions of users. Now recommender systems like the ones you find on Facebook and YouTube have a long history. Famous early examples include Luhn from 1958, the Information Lens by Malone et al., the Tapestry email filtering system by Goldberg et al. and GroupLens by Resnick et al. Looking at the research out there you find that Facebook received a lot of attention regarding algorithmic awareness, user beliefs about the system, how its system works and the biases that the system enacts. Meanwhile, especially when I started my research, there was comparatively little on YouTube despite its importance and the many people who use YouTube. And what motivated my research was a study by Eslami et al. from 2015. They found that 62.5% of Facebook users were not aware of the existence of Facebook's newsfeed algorithm. They also showed that users are upset when posts by close friends or family are not shown and users mistakenly believe that their friends intentionally chose not to show them these posts. Eslami wrote, in the extreme case it may be that whenever a software engineer in Menlo Park adjusts a parameter, someone somewhere wrongly starts to believe themselves to be unloved. And there's a lot of research that has pointed out the political, social and cultural importance of machine learning based curation systems. Zeynep Tufekci wrote in the New York Times that YouTube may be one of the most powerful radicalizing instruments of the 21st century. And challenges like fake news, biased predictions and filter bubbles make an understanding of ML based curation systems an important and timely concern. Journalists and researchers have accused ML based curation systems of enabling the spread of fake news or conspiracy theories in general. And these accusations make sense because such systems can shape users' media consumption and influence elections. I will talk a lot about bias in this talk so I want to operationalize what I understand as bias. In the context of my thesis and also this talk, I operationalized bias as an inclination, prejudice or overrepresentation for or against one person, group, topic, idea or content, especially in a way considered to be unfair. And there are famous examples for biased predictions. Epstein and Robertson for instance found that biased search engine results can shift the voting preference of undecided voters by 20% or more. So considering this prior work, I started to believe that it is important to understand whether users are aware of the ML based systems they are interacting with and whether users understand how such systems work. Otherwise, users might believe that they are presented with an objective reality, even though the news they are seeing is the result of a co-production between their actions as a user and a machine learning system's ability to infer their interests. Now consider this example. If we have a recommender system, it can easily lead to a virtuous circle.
So you end up watching a video related to human rights and then you learn about the treatment of asylum seekers at the European borders. And that leads you to develop an interest in the decriminalization of civil sea rescue. However, it can also lead to a vicious circle where you watch a video about a crime committed by a foreigner and then you see many videos about crimes committed by foreigners, because the system just infers that this is what you're interested in, that this is what you like. So it's just giving you what you like. And you may end up with a distorted view of reality, changed political views and even xenophobia. This poses the question, how does machine learning influence people? And you might have heard about the potential dangers of so-called online radicalization and algorithmic rabbit holes, where people end up in this loop that I just described: they have one topic and then see more and more related to that particular topic. And there was one incident that really motivated me to understand what's going on on YouTube and what kind of recommendations the system provides. So most of you probably will remember that in Chemnitz, the stabbing of a citizen spawned street demonstrations and rioting. And the New York Times wrote an article called, As Germans Seek News, YouTube Delivers Far-Right Tirades. So according to the Times, people who tried to inform themselves on YouTube were shown increasingly radical far-right videos about the incident, which allegedly radicalized them and which fueled the protests. And this motivated us to perform audits to see whether YouTube is actually systematically recommending more and more radical content. Why is this important? Well, video recommendations of political topics and news have special requirements, especially in Germany. We have laws that force broadcasters to provide fair and balanced reporting. We also have laws that make sure that minorities are protected. In Germany, one of these laws is the so-called Rundfunkstaatsvertrag, the Interstate Broadcasting Agreement. And it's a law that enforces that broadcasting services report in a fair and balanced manner that takes minority views into account. So motivated by this, we performed the 2019 YouTube Chemnitz audit. And we found that YouTube is not pushing users towards politically extreme content by consistently suggesting more extreme videos. YouTube is also not leading users down a rabbit hole by zooming in on specific political topics. What we found is that YouTube is pushing increasingly more popular content, as measured by the views and likes. The sadness evoked by the videos decreased while the happiness increased. Now let's take one step back, because this is only part of a much larger puzzle. To thoroughly understand radicalization on YouTube and how YouTube influences the behavior of users, research would have to show that YouTube is presenting users with increasingly extreme content, that this extreme content negatively affects users' attitudes, that this affects their intentions, and that this changes their behavior. With this talk, I really only can talk about the first point. That is, whether YouTube is presenting users with increasingly extreme content. But I strongly invite other researchers to look at all these different aspects. And these audits can really be a voice for the voiceless. The dictionary defines the word audit as a systematic review or assessment of something.
And in this talk, I will show you that audits can enable researchers and civic hackers to uncover the potential hidden agendas of social networking sites. Audits are especially interesting because they are immediately meaningful to users, as newspaper reports by Smith et al. suggest. So unlike explainable AI techniques, which might require a deeper understanding of statistics, these audits can be interpreted by anybody. So why perform audits? Because they enable individuals and society at large to monitor and control the recommendations of machine learning systems. And I found that audits are a very useful way to identify potential biases enacted by these systems. Sandvig et al. distinguish five different kinds of algorithmic audit studies: code audits, non-invasive user audits, scraping audits, sock puppet audits, and crowdsourced audits. And I'm going to explain each one of them step by step in the following. So with a code audit, you obtain a copy of a relevant algorithm and then you study the instructions in a programming language. And this is challenging since the code is considered valuable intellectual property. And the code is commonly concealed using trade secret protection. Understanding systems through code audits is also challenging because algorithms depend on personal data. That is, they need to be audited with real data to be understood. Machine learning code is also often quite trivial; the data is the most important thing. And to illustrate this, I have the code example here on the right. And that's actually a fully functioning machine learning system that can detect spam. It was coded using the Python library scikit-learn, which makes it quite easy to train machine learning systems, hiding a lot of the complexity. And what you can see here is that the things that are specific to the spam filtering use case are just the two things that are highlighted: the file of the data, the emails.csv, as well as the dimensionality of the data. Now if we would change the data from emails.csv to cars.csv, we could easily turn the system into a car recommendation system. We could also swap the file emails.csv for a file called cancer.csv and turn this into a breast cancer detection system. It all goes to show that studying the algorithm is not sufficient and not really helpful for our use case. So we really have to look at the output.
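For readers who want to see what such a deliberately generic scikit-learn pipeline looks like, here is a minimal sketch in the spirit of the slide just described. The file name emails.csv, the column names and the choice of classifier are illustrative assumptions, not the speaker's actual code:

```python
# Hedged sketch of a scikit-learn spam filter of the kind described above.
# "emails.csv", the column names and the classifier choice are assumptions.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

data = pd.read_csv("emails.csv")  # swap in cars.csv or cancer.csv and little else changes

X_train, X_test, y_train, y_test = train_test_split(
    data["text"], data["label"], test_size=0.2, random_state=0)

vectorizer = TfidfVectorizer(max_features=5000)  # the "dimensionality" of the data
classifier = LogisticRegression(max_iter=1000)

classifier.fit(vectorizer.fit_transform(X_train), y_train)
print("accuracy:", classifier.score(vectorizer.transform(X_test), y_test))
```

The point of the example stands either way: the task-specific parts are the data file and its dimensionality, so reading the code alone tells you very little about what the system will actually do.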
The second type of audits that Sandvig et al. recognize are the so-called non-invasive user audits. There you ask users questions using a survey format. However, this comes with serious sampling problems, because how do you actually reach the users that you want to reach? This also comes with important validity problems, for instance due to cognitive biases, because people might just remember things wrongly and might not be good at explaining why they did certain things. The third kind of audits are the so-called scraping audits. And there you have a script that interacts with a platform, for instance by querying a particular URL. And this allows researchers to obtain a large number of relevant data points. A more sophisticated version of these audits are the so-called sock puppet audits. Here a script is really impersonating a user and creating programmatically constructed traffic. And this is what I will be focusing on and I'm going to explain it in a lot more detail in the next step. The other potential way of performing an audit are the so-called crowdsourced audits. And there you recruit a large number of users to use a particular platform. So it's quite similar to the sock puppet audits, but it's doing it with real people. However, this is challenging because you need to find a large number of people, which can either be done through Amazon Mechanical Turk or through inviting volunteers. But that can be quite challenging. So I performed a so-called sock puppet audit where I wrote a script that is remote controlling a browser and impersonating a real user. Just a reminder of what we were trying to do. We were motivated by the Chemnitz incident and we wanted to know whether YouTube is actually showing increasingly radical far-right videos for a variety of political topics, as the New York Times has claimed. So using a Firefox-based bot that I'm going to release with this talk, we performed 150 random walks that always followed the same procedure. We randomly picked one of nine political topics from Germany. Then we entered the topic in German into the YouTube search bar. Then we randomly picked one of the top 10 search results. And then we saved the video page and watched it for a random number of seconds. Then we randomly chose one of the top 10 video recommendations displayed in the right sidebar next to the video. And then we repeated this 10 times. And we looked at both quantitative metrics, like the number of likes and views, as well as qualitative metrics. And for this qualitative investigation, we performed an in-depth analysis, for which we randomly selected three videos per topic and coded three videos per random walk. We coded the initial video, the fifth video, and the tenth video. And this coding was performed by three independent raters, one male, two female, all in their 20s to 30s, who did not know about the research question. And they really watched the videos for five minutes or more. And then they assessed how closely related the videos are to political topics. They also rated whether the videos evoked sadness or happiness on an 11-point Likert scale from least (0) to most (10). So we simulated a regular web browser by remote controlling a browser. We collected between 12 and 25 random walks per topic. For each random walk and each topic, we started a new browser instance and cleared all cookies. All random walks were collected in May 2019 with the same laptop on the same network. The decision to select the fifth and the tenth recommendation for the in-depth analysis was made at the beginning of the study, that is, before reviewing any of the material and before we performed any kind of analysis. The raters reviewed all videos in the same randomized order. We computed Krippendorff's alpha to understand how strong our interrater agreement is. And we found substantial agreement regarding how similar the videos were to the topics in our investigation, at 0.765, and the sadness evoked by the videos, at 0.613. We also have moderate agreement for the happiness, at 0.441. So you might wonder what the topics are that we chose. We took nine political topics from a representative telephone poll conducted on behalf of the WDR, the Westdeutscher Rundfunk. You find the topics here on the slide, I won't read them out, but they were what people at the time thought were the most pressing issues. And we used the keywords just like they were in the telephone poll.
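Schematically, one of these random walks can be sketched as below. The two helper functions are stubs standing in for the real Selenium logic in the released crawl script, so treat this as an outline of the procedure rather than the script itself:

```python
# Schematic sketch of one random walk from the audit. The helpers are stubs
# for the real browser automation in the released script.
import random
import time

DEPTH = 10  # number of recommendations to follow per walk

def youtube_search(topic):
    """Stub: in the real script this queries YouTube and returns video URLs."""
    return [f"https://youtube.example/search-result-{i}" for i in range(10)]

def recommendations(video):
    """Stub: in the real script this reads the sidebar recommendations."""
    return [f"{video}/recommendation-{i}" for i in range(10)]

def random_walk(topic):
    video = random.choice(youtube_search(topic)[:10])  # one of the top 10 search results
    for step in range(DEPTH):
        print(f"step {step}: {video}")        # the real script saves the page source here
        time.sleep(random.uniform(0.1, 0.5))  # stand-in for "watching" a random time
        video = random.choice(recommendations(video)[:10])  # one of the top 10 recommendations

random_walk("example topic")
```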
Our audit revealed that recommendations become significantly more popular, measured by views and likes. You can see a steep increase from the initial videos to the recommendations. Note that we operationalized popularity as the number of views and likes. We included both because views are an implicit measure of popularity, while likes are an explicit measure of popularity. Regarding views, it also remains unclear how many seconds a video must be watched before it's counted. We have a table here which provides the median and mean numbers of views and likes. Comparing the initial videos and the fifth recommendations, a substantial increase in views and likes can be observed, especially between the initial videos and the fifth recommendation. While the initial videos have a median of 9,500 views, the fifth recommendations have a median of around 200,000 views. After following a chain of 10 recommendations, the views have a median of almost 300,000 views. The number of likes increases significantly too. The initial videos have a median of 170 likes, while the fifth recommendations have a median of 1,404 likes. We performed two-tailed Mann-Whitney U tests, which support the finding that the number of views and the likes change between the initial videos and the recommendations. The audit also revealed that recommendations become significantly less related to political topics. The median topic similarity rating of the initial videos was 8. This decreased dramatically to 0.83 after following only five recommendations. The similarity remains very low for the 10th recommendations with a median of 1. Two-tailed Mann-Whitney U tests indicate that the topics in the videos change between the initial videos and the fifth recommendations and between the initial videos and the 10th recommendations. So, all these results indicate a strong topic drift. We also found that the happiness in the videos increased, while the sadness decreased. The happiness changes from a median of 0 for the initial videos to a median of 2 for the 5th and 10th recommendations. So, while 75% of the initial videos have a happiness rating between 0 and 2, more than half of the 5th and 10th recommendations have a happiness rating higher than 2. Regarding the sadness evoked by the videos, the trend is opposite. The median ratings in the box plot in the figure move from 1.67 for the initial videos down to 0.0 for the 5th and 0.33 for the 10th recommendations. So, while more than half of the initial videos have a sadness rating higher than 1.67, 75% of the 10th recommendations have a rating smaller than 1. Overall, in contrast to what the New York Times reported, our findings suggest that the dangers of online radicalization may be exaggerated. Now, taking a step back and taking the power back, I want you to understand that scraping audits and sock puppet audits are, in my opinion, the most promising method to investigate complex machine learning systems. Because these audits can be used to identify popularity biases like the one that I showed you, but they can also be used to see whether a system is enacting an agenda bias, or if a system has a tendency to discriminate against or towards a particular ethnic group. So from your experience reading the news, you know that controversial political topics require a balanced presentation of all arguments in a way that weighs the pros and cons.
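As an aside for readers who want to replicate this kind of analysis: the Mann-Whitney U test mentioned here is a standard nonparametric test and is available in SciPy. The numbers below are made-up stand-ins, not the study's data:

```python
# Two-tailed Mann-Whitney U test, as used to compare initial videos with
# recommendations. The view counts here are invented example values.
from scipy.stats import mannwhitneyu

views_initial = [9500, 1200, 88000, 4300, 15000, 700]
views_fifth = [200000, 150000, 310000, 95000, 480000, 260000]

statistic, p_value = mannwhitneyu(views_initial, views_fifth, alternative="two-sided")
print(f"U = {statistic}, p = {p_value:.4f}")  # a small p-value indicates the distributions differ
```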
Coming back to the findings: the audit suggests that YouTube's recommendation system is not suited to help users inform themselves about complex political issues. The popular recommendations that you see here are of course attractive for the majority, and this could be motivated by financial incentives that try to optimize the watch time for a broad majority. So, in a way, the audit is a prime example for the recentering of public engagement around the comments of social media. One important limitation of my approach is that we cannot rule out that a rabbit hole effect exists for some users. We also wrote a paper on this in more detail, where we examine how middle-aged users without a background in technology think YouTube works. And what we found is that this is connected to a lot of negative beliefs, where people think that YouTube is actually selling the recommendations, and they think that YouTube has psychological experts that just try to keep them watching and watching. If you want more, read the paper. It's written by Oscar Alvarado, me, Vero Vanden Abeele, Andreas Breiter and Katrien Verbert, and was published at the CSCW conference this year. Now here's my call to action. I want you to collect YouTube search results, video recommendations and advertisements for different topics. And I want you to do this without user accounts and with user accounts. And my goal is to systematically analyze the recommendations by YouTube's machine learning system. But the next step after that would be to design, implement and evaluate algorithmic transparency tools that help users understand and influence their recommendations. And in the following, I will show you the script that I wrote. And I'm also going to point out where it can be adapted to study not only YouTube across countries, languages and topics, but also other platforms like Instagram, like TikTok, and a variety more. And the audits, in my opinion, could be a powerful tool to surveil surveillance capitalism. With the audits that I described, it would be possible to investigate how the content is targeted to individual users. So the audits could be used to explore how advertisements are targeting specific users. And this directly relates to the dangers of the so-called surveillance capitalism. Shoshana Zuboff described surveillance capitalism as human experience, which is used as raw material for translation into behavioral data, which is declared as a proprietary behavioral surplus, fed into advanced manufacturing processes known as so-called machine intelligence, and fabricated into prediction products that anticipate what you will do now. Following these political and economic forces, it's vital to investigate how advertisements are targeted to users. So these audits that I presented could be a tool to investigate personalization, as well as the user profiles of contemporary surveillance capitalism. I also believe that a foundation for machine learning-based systems is needed. And in the thesis, I describe in detail two different models that could be used. One is following the German Association for Technical Inspection, the TÜV; the other one is following the German Foundation for Product Testing, the so-called Stiftung Warentest. And both approaches could be used to make sure that machine learning systems act in the interest of society at large.
TÜV institutions, for instance, evaluate each car in Germany every year to ensure that a car is street legal. The purpose of the German Foundation for Product Testing is to compare goods and services in an unbiased way. So the TÜV ensures that something complies with a certain norm, commonly making binary decisions whether something is permitted or not. The Stiftung Warentest usually develops a catalogue of criteria used to compare different instances of a specific kind of product or service. So an expert consortium defines these criteria for specific products or services and in particular contexts. Now, a foundation for machine learning-based systems could adopt this schema and iteratively develop criteria for the control of ML-based curation systems. Audits could then be used to make sure that the system is not enacting a popularity bias or that the system is not discriminating against ethnic minorities or certain gender identities. And I really hope that this talk will inspire other researchers to examine users' understanding of machine learning-based curation systems or other machine learning systems, and to motivate them to design and develop novel ways of explaining and auditing such systems. But until these bigger things are established, it's kind of up to you and me. So here's my call to civic actors. Use the script to investigate the recommendations and the ads on YouTube. And here are some ideas you could look at: fake news and pseudo-science related to climate change or the COVID-19 pandemic, vaccination in general, the moon landing conspiracy or the so-called flat earth theory. So in the repository, there are two scripts that I'm providing. One is called crawl YouTube and the other one is called extract data from downloaded videos.py. And they're both Python scripts. So let's consider the first one, called crawl YouTube. As I told you, the goal is to remote control a web browser. And we're using the web testing library Selenium for that. Selenium is also available in other programming languages, but I'm using it here via Python. And this is based on a Chrome browser. You can use different browsers; there's also an extension for Firefox and others. And in the script, you have different parameters that you can set. So here's the number of paths to collect per keyword, and I set that to 20. Then the number of search results to consider, which is set to 10. And the number of related videos to consider, which is set to 10. And the number of related videos to visit, the depth, and that's the number of recommendations we're collecting; the naming is a bit weird. Here are the different keywords that we're entering into YouTube to download the recommendations. And it's quite easy for you to add your own keywords. So you could just type one in and then save the file, and that would be sufficient. I'm going to remove it for now. If you just want to replicate the same approach that I showed you in the paper, then that would be sufficient as well. So when running the script, we randomize the order of the keywords. And then we have the main loop here, where, for each of the keywords, we collect the number of paths that we specified in the parameters. And for that, we start a new Chrome instance and clear all the cookies. That's what we're doing here. And then we're taking the keywords, and we're entering them as a search query to YouTube. We're opening up the link, the URL, and we're waiting a bit. The reason for that is because YouTube is dynamically loading a lot of the videos.
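Condensed to its essentials, the loop just described looks roughly like this. The parameter names and placeholder keywords are mine, not copied from the released script, so treat this as a sketch of the structure rather than the script itself:

```python
# Sketch of the crawl loop described in the talk. Parameter names and
# keywords are illustrative; the released crawl script is authoritative.
import random
import time
from urllib.parse import quote_plus

from selenium import webdriver

PATHS_PER_KEYWORD = 20    # number of paths to collect per keyword
NUM_SEARCH_RESULTS = 10   # search results to consider
NUM_RELATED_VIDEOS = 10   # related videos to consider
DEPTH = 10                # related videos to visit per path

keywords = ["your topic here", "another topic"]  # add your own keywords
random.shuffle(keywords)  # randomize the order of the keywords

for keyword in keywords:
    for _ in range(PATHS_PER_KEYWORD):
        driver = webdriver.Chrome()  # fresh instance, i.e. no cookies carried over
        driver.get("https://www.youtube.com/results?search_query=" + quote_plus(keyword))
        time.sleep(10)  # crude stand-in for the explicit wait discussed next
        # ... pick a random search result, then follow DEPTH recommendations ...
        driver.quit()
```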
So when the web browser has finished loading, there's still loading going on in the background where a lot of data is retrieved. And that's what we're waiting for here. And I just basically wait until the browser knows there's an element called comments. And it knows that by the ID, an element with the ID comments. And then we're preparing a file name, because we can't just save the URL to the file system. And if we haven't already visited that website, we're going to write that down. And we're writing the entire source of the website. And then we're collecting the top n recommendations. And then we're selecting a random video from these recommendations. And then we're following the recommendations up to a certain depth. And it's always the same procedure. We open the website, we're waiting for a random amount of time, we're waiting until we can see the comments. And then we're saving the path, and we're finding the recommendations and selecting one of the recommendations. And it's quite nice because we can just use the CSS classes to find certain elements in the website, based on the ID and based on the class. And after that, we're not only saving each of the individual videos that we're visiting, but also the path, so which video led to another. And we're saving that to a file called crawl underscore paths. So if you were to adapt this code, the easiest way would be to add your own keywords and, of course, to change the parameters. But you can easily adapt this code also to visit other websites like Instagram, like Telegram, and then collect data through this mechanism. So the system selected education policies, and it's now downloading the different videos. I'm stopping it here to show you the downloaded videos. And I do that by typing Ctrl-C. And let's have a look at the source code of one of the videos that we downloaded. And you can see that this is really the whole HTML document, including all the CSS and the JavaScript. And you can find a variety of things in the data. For instance, if we look for the video title, we find the CSS for that video title. But we also find the actual video title, and that's how our education system is embarrassing itself. And that's really what we're doing programmatically, right? I mean, we have the HTML and then we use the HTML to extract certain information, and I also provide a script to help you with that. And that's the script extract data from downloaded videos. So here we're using a Python library called Beautiful Soup that allows you to parse HTML and to be able to search the HTML efficiently. So what we're doing here is we're looping over all the videos that we downloaded and then we're parsing the HTML that we downloaded. So what you can see here is we're selecting the number of views of a video based on the ID info-text and the class view-count. So how do I know where the views are? Well, it's quite simple, because I just looked at the source code. So if we find a video that we're interested in, like this wonderful talk by David Kriesel, we just look at the CSS selectors by right-clicking and clicking Inspect in Chrome, but it's the same in all the browsers. So we have a span with the class view-count, and that's within a div that's called info-text. And based on that, now going back to the source, we're selecting the view count. We're taking the first one because there's usually just one. And we do this for all the different things, for instance the date on which the video was posted, the name of the channel, the number of subscribers.
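Put together, the two steps just described, waiting for the dynamically loaded page and then extracting values from the saved HTML, can be sketched as follows. The CSS selectors follow the talk and YouTube's layout at the time, so they may no longer match the current page, and `driver` is the Selenium instance from the sketch above:

```python
# Sketch: wait for the dynamically loaded page, save it, then parse it.
# Selectors reflect YouTube's layout as described in the talk and may be outdated.
from bs4 import BeautifulSoup
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

# 1) Wait until the element with the ID "comments" exists, then save the source.
WebDriverWait(driver, 30).until(EC.presence_of_element_located((By.ID, "comments")))
with open("video.html", "w", encoding="utf-8") as f:
    f.write(driver.page_source)

# 2) Parse the saved HTML and pull out the view count.
with open("video.html", encoding="utf-8") as f:
    soup = BeautifulSoup(f, "html.parser")
views = soup.select("#info-text span.view-count")[0].get_text(strip=True)
print(views)
```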
And as you can also see here, it's a bit more tricky to get the likes and the dislikes, but you can have a look at the code on your own to figure it out. It's not really rocket science either. So here are the references of the paper. I mentioned quite a large number of papers, so I'm just quickly going to scroll through them and give you a chance to stop the video to look at them. And I again invite you to have a look at my doctoral thesis, Users and Machine Learning-based Curation Systems. Thank you very much for your attention. Okay. Thank you for your interesting talk, Hendrik. And now you have the option to ask him some questions. Use the RC3 CWTV hashtag on social media or the IRC chat to do so. And there already are some questions and a wish. And the wish is, can you please provide the link to the slides? Sure, yeah, I can upload them. I just put them in the repository. I think there's a link to the repository in the video. And yeah, I can definitely provide the slides. Excellent. Okay. Then the first question, how would you want a platform like YouTube to make minority views more attractive without also advertising similar small extremist views? Very good question. Yeah, I think my main idea behind the talk and also behind a lot of the other research that I'm doing is to give people more control. And it kind of starts even just with the knowledge that these recommendations are selected by a machine learning system and they're selected with a particular rule. And that's the first step. So everybody knows: I'm seeing the recommendations because there's a system that's, like Amazon, trying to find things that are similar to what I've done in the past and just trying to show me stuff that's similar to what I've done in the past. And I think that understanding is the first and maybe the most important step. But the second step would then be also to give people control over what they're seeing. And that would be to just give them more tools and more configuration settings to decide what recommendations they want to see and in what context they want to see them. And I think the second question, with the more extremist as in far-right extremist, I think that's a different issue in a way. That's kind of a policy question about what's uploaded, and that's orthogonal to what I'm talking about. Right. I think this is more related to actually making sure that people can't just upload things like that. It shouldn't be on the platform in the first place. So that's not a recommendation system issue per se. But very good question. Thank you. Okay. Thanks. So next question. What do you think about the recent AI made in Germany initiative that aims at trackable, responsible AI? To be honest, I don't know much about it, so I really can't comment on it. I think the idea of having responsible AI, I'm all for that. But it's something that's really, really hard to do. And I think a lot of people are working on this actively. But yeah, the more the merrier, so I very much welcome that, but I can't comment on that in particular because I don't know it. Okay. And do you also run audits in which you choose two videos of the same topic before picking the rest randomly? No, I haven't done that yet. But I think it's something that's worth doing.
But I think the most important thing that I really want to do with the audits is understanding personalization. So me, for instance, I'm not sure when I created my YouTube or Google account, but it's been years, like 10 years or something, right? So they really know a lot about me. And I think nobody really understands yet how that influences the recommendations. And I think it would be really interesting if people with their very old accounts, not just accounts they created a week ago, but accounts they've used for years and years, start to do these audits relating to the topics that I presented here, but also related to more urgent topics. For instance, you can just go to the ARD-DeutschlandTrend, which every now and then is asking people in Germany what's interesting. It's a representative poll by the ARD, or the WDR. And you can then use these topics to see what people are or might be Googling or might be looking for on YouTube. Okay. Yeah. While doing that, aren't there any limits YouTube imposes on you when crawling? Yeah. I mean, that's the thing. I mean, there are different ways of doing this. And I commented on the Smith et al. works, which were using the YouTube API, and the YouTube API has very clear limitations on what you can do and what you can't do. But I did them by remote controlling a browser. So there are no natural limits. Of course, you will be blocked if you're too eager, let's say. So you should be responsible and you should have delays every now and then and take some time. But there's no technical limit, right? But I mean, be nice and don't overload the service. Okay. Now we have a question in German: I think deleting the cookies won't be enough there. All requests come from the same IP address, or from the same IP range. Likewise, the DNS request is relevant. Was this intended? Let me just repeat it for all the English speakers. So the question is, what about cookies, and the point that just deleting the cookies is not sufficient, because you're coming from the same IP address and there are a lot of things that are really limiting you here. And that's just one of the limitations I have to live with. In this particular audit, I did not do that. And I did that in May 2019, quite recently after the Chemnitz incident. But that's also kind of my idea of releasing the script, because there are so many things that have an influence on the recommendations, at least have a potential influence, that we really need a lot of people to do audit studies to really get an idea. And in a way, that's why I want other people to be able to do these kinds of things. Because I wholeheartedly agree with the comment and I think that the IP makes a difference. I mean, I now kind of scientifically said, okay, these are the limitations of our approach. This is what we know. This is what we controlled for. But yeah, probably it has an influence, but we don't know. So I think we just need a lot of people, kind of Wikipedia style, to go about this problem. Okay. Thanks. And there are a lot of questions. How did YouTube react to this? Did you run into any countermeasures against automation on YouTube? No, actually not. I think that's in a way why this is such a nice hack, in that we just use Selenium, and a lot of people use Selenium as part of their web testing. It's a very, very widely used tool, right?
It can be part of your continuous integration workflow, just making sure that certain things about your websites work. And it's quite hard to block in a way, because it's an actual Firefox. It's not just trying to be Firefox. It is Firefox. And of course, if you look at the behavior of the user, then it's quite artificial in a way, and you probably can detect it. But when we did this, we didn't see anything. I mean, you definitely need to look at the legal situation and make sure that you comply with and obey all the laws. But for this academic purpose that we've been doing this for, we didn't see any problems. Next question. YouTube might be using several different algorithms and keep changing their tech. How does your research address this? Again, this is limited in a way. I mean, we did this at this particular time and with this particular purpose. But that's also why I'm open sourcing it, right? I want other people to look at this. And it's known that there are a lot of A/B tests and there are probably dozens of versions of YouTube running at the same time, targeting different people. But in a way that's just showing how important this kind of research is. Because again, there are billions of people using this, 70% of the videos are recommended by the algorithms, and we don't know shit about it, to be blunt in a way. So yeah, I think just take the script and have many people do this. Yeah, yeah. Thank you. Yeah, there are, oh, there's a new question. And why is crawling with Chrome no problem, while normal surfing with a Tor browser is constantly blocked? Is Tor blocked? Or what's the question? Crawling with a Chrome browser is no problem, but using the Tor browser is a problem most of the time, because there are captchas and verification prompts and you're constantly blocked. I have no idea, to be honest. I know that you can use Selenium with Tor. And I also know that this can be interesting because you get different endpoints, of course. So that can be quite useful. But what YouTube is actually doing to prevent people from using Tor, and why, I don't know. Then there are some people in the IRC telling you that it's a really nice talk. Thank you very much. Thank you very much for that. You're welcome. Yeah, are there any questions for this Q&A session left? You can go to the IRC chat. It's linked on the streaming.media.ccc.de page, or do it on social media. I put the slides in the repository and people can find it. Especially if, let's say, media science students and the like use these scripts and do exciting stuff. Because there's a lot of interesting research on Twitter, for instance, because it's quite easy to do this kind of research. And I really hope, in a way, that releasing the scripts makes researching YouTube a bit easier, and hopefully also Instagram and Telegram. Because in principle, it's really the same. You just toss in a URL and then look for the HTML elements like I showed. You need to know HTML a bit and Python a bit. You can only do so much in a talk, right? A new question appeared. Doesn't randomly choosing a recommended video have its own bias? This may prevent the ML algo from learning a user preference and following a rabbit hole? Yeah, very good question. And definitely it has an effect. I reflect on that in the thesis. In a way, it's a conscious decision, right? You can do a lot of different things. That's just the one way I tried it, which I thought is the most interesting way that I can do now. But it definitely has an effect.
And it definitely is also quite different from a human being on the website, right? But again, that's why I think this is only one small piece of the puzzle. And we definitely need more than that. Another question. Did you investigate the effect of having a German IP address or the browser language being German? I had a university IP address. The browser was actually set to English. My whole system was set to English at the time. I'm acknowledging it again: in a way, this is like this puzzle piece which has these different settings. But we don't know what difference it would make. Do we get different recommendations? And that's about it. Okay. Then thank you very much for your awesome talk. Yeah. Thank you, Elfriedi. And have a beautiful rc3. Same to you. And happy hacking to everybody. And let's hope we can do a bit of surveillance of surveillance capitalism. Okay. Bye-bye. Bye-bye. Bye.
This talk explains why audits are a useful method to ensure that machine learning systems operate in the interest of the public. Scripts to perform such audits are released and explained to empower civic hackers. The large majority of videos watched by YouTube's two billion monthly users is selected by a machine learning (ML) system. So far, little is known about why a particular video is recommended by the system. This is problematic since research suggests that YouTube's recommendation system is enacting important biases, e.g. preferring popular content or spreading fake news and disinformation. At the same time, more and more platforms like Spotify, Netflix, or TikTok are employing such systems. This talk shows how audits can be used to take the power back and to ensure that ML-based systems act in the interest of the public. Audits are a ‘systematic review or assessment of something’ (Oxford Dictionaries). The talk demonstrates how a bot can be used to collect recommendations and how these recommendations can be analyzed to identify systematic biases. For this, a sock puppet audit conducted in the aftermath of the 2018 Chemnitz protests for political topics in Germany is used as an example. The talk argues that YouTube's recommendation system has become an important broadcaster on its own. By German law, this would require the system to give important political, ideological, and social groups adequate opportunity to express themselves in the broadcasted program of the service. The preliminary results presented in the talk indicate that this may not be the case. YouTube's ML-based system is recommending increasingly popular but topically unrelated videos. The talk releases a set of scripts that can be used to audit YouTube and other platforms. The talk also outlines a research agenda for civic hackers to monitor recommendations, encouraging them to use audits as a method to examine media bias. The talk motivates the audience to organize crowdsourced and collaborative audits.
10.5446/51953 (DOI)
Welcome back at the Chaos West TV hall stage, second day. Hopefully you didn't lose your sense of time already. That seems to be happening to people at the Congress quite often. But if you haven't, and you found your Mate and a good place to sit and watch, we'll have the next talk for you, held by Julian Fietkau. It's called The Elephant in the Background: Empowering Users Against Browser Fingerprinting. Most of you probably know cookies, and that cookies are a slightly misused tool of the advertising industry, violating your privacy. There are many tools against cookies and it's quite easy to defend against that with some tools. But of course the advertisement industry is resourceful as ever, and they have their new tool called browser fingerprinting. And that's a little bit harder to defend against. And Julian is from a group of four people that developed this tool called FPMON that will show you when you are being tracked. You can check what's happening there. This is a pre-recorded talk that Julian held, and I would say let's watch what Julian has to show us. Hello community, and welcome to our talk about The Elephant in the Background, a quantitative approach to empower users against browser fingerprinting. My name is Julian and I'm the project lead of the research project that I would like to present to you in the next half an hour. Before we start, I would like to introduce you to my team that has worked on this project for almost one year now. In the beginning, me and Felix kickstarted the project, and later on Sebastian and Kashyap joined our efforts because the workload had grown tremendously over time. During the project, we have all been associated with the Security in Telecommunications research group that is led by Professor Seifert. This is actually a very good moment to thank all of these people for their commitment and support that made this project such a great success, even in these difficult times. But now let's start our story. Tracking users is a ubiquitous practice on the web today. These activities are recorded on a large scale and analyzed by various actors to create personalized products, forecast future behavior and prevent online fraud. While so far HTTP cookies have been the weapon of choice, new and more pervasive techniques such as browser fingerprinting are gaining traction. Browser fingerprinting is very similar to cookies but works quite differently. Instead of just receiving a unique identifier, for a device fingerprint we need to collect tiny pieces of device-specific data that altogether can uniquely identify a user. Similar to cookies, fingerprinting does not always mean identification or tracking. It is just a technical process of collecting a lot of device data. The lines between using this data for benign operations and tracking are very blurry, hence in most cases we can only speculate on how this data is used. There are many reasonable applications for fingerprinting, such as content tailoring to personalize your browsing experience, or to prevent malicious behavior for security reasons. But it can also be used to analyze and identify users. In this talk, we want to describe how users can be empowered against browser fingerprinting by showing them when, how and by whom they are being analyzed. To this end, we conduct a systematic analysis of various browser fingerprinting tools. Based on this analysis, we introduce you to FPMON, a lightweight and comprehensive detection tool that measures and rates JavaScript fingerprinting activity on any given website and in real time.
With FPMON, we will evaluate the Alexa top 10k most popular websites to study the pervasiveness of JavaScript fingerprinting, review the latest fingerprinting countermeasures, and identify the major networks that foster the use of fingerprinting. Before we go deeper into this, let's first of all get everybody on the same page and let us understand how browser fingerprinting really works. So let's start with a quick example of how fingerprinting can be done on your local device. This process can be described in three steps. First of all, we will query the device data via JavaScript, which gives us a unified interface to an enormous amount of device-specific data. An easy example can be executed by just calling navigator.userAgent, navigator.languages or navigator.connection to get some of the various device-specific values. More advanced techniques will leverage variations in hardware and software to generate a device-specific value. For instance, using the WebGL API, we can apply a set of textures and ambient lights to a 3D object. By analyzing the generated picture, we will get a slightly different result on every device, which can be used to improve the user fingerprint by just another data point. Similar methods have been shown for the HTML canvas element and the Web Audio API. In the next step, all the collected device data is combined into a comprehensive device profile. At best, this profile is unique and reproducible. In the last step, the device profile is used to calculate a hash value that represents the fingerprint. Mostly, this is done for quick and easy comparison. Now, we want to show you how this fingerprinting process is embedded in the web. Most typically, there are three parties involved: a web user, a first-party content provider and a third-party fingerprinting service. First of all, the content provider needs to embed a fingerprinting script into the content of its service. When a user visits this web page, the browser will download and execute each script included in the loaded page source. As a result, the fingerprinting script will be executed on the user device and starts to collect the device features. Either all the collected data or a simple profile hash is sent to the fingerprinting service. Afterwards, the service provider matches the received identifier against its database of known profiles. If the profile matches, the user is identified; otherwise a new profile will be created. In the end, the content provider can access the results of the analysis or receives direct insights, for instance whether a user can be trusted or not. The service provider is paid by the content provider or monetizes its service in some other way. The first step on our mission to empower users against this practice was to understand and classify the JavaScript functions that are most typically used for fingerprinting. To this end, we have systematically analyzed multiple commercial and public fingerprinting tools that are created by companies like Sift, Iovation, Xeon and Datadome. In addition, we analyzed several open implementations like FingerprintJS, AmIUnique, BrowserLeaks and the Panopticlick project. Hereby, we obtained a collection of 115 JS functions that are used by those fingerprinting tools. Indeed, not every function is responsible for fingerprinting, but when combined in a specific order, these functions are indicative of fingerprinting activity.
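To make the combine-and-hash steps from the walkthrough above concrete, here is a small illustration. It is written in Python for brevity, whereas real fingerprinting scripts do this in JavaScript inside the browser, and all feature values are invented:

```python
# Illustration of combining collected device features into a profile and
# hashing it for quick comparison. All values are invented examples.
import hashlib
import json

profile = {
    "userAgent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "languages": ["de-DE", "en-US"],
    "screen": "1920x1080x24",
    "timezoneOffset": -60,
    "canvasHash": "a3f1c2...",  # stand-in for a rendered-canvas hash
}

# Serialize the profile deterministically, then hash it: the resulting
# digest is the "fingerprint" that gets matched against known profiles.
fingerprint = hashlib.sha256(
    json.dumps(profile, sort_keys=True).encode("utf-8")
).hexdigest()
print(fingerprint)
```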
In the next step, we classified those 115 functions into 40 different features, where each feature represents an individual vector to fingerprint a user. Some of these features cover functions that read out the screen information or the configured languages, or more complex ones, like functions that are used for WebGL and audio fingerprints. To account for the different capabilities of these features, we applied a simple weighting mechanism by labeling each feature with a severity rating. Less critical features have been labeled sensitive, while more problematic ones are labeled aggressive. Clearly, none of the classified features is only related to fingerprinting. More importantly, it's even fundamentally impossible for a user who visits a website to know whether she's fingerprinted or not, unless it is explicitly stated. However, we argued that the combined use of the JavaScript functions is a strong indicator of fingerprinting activity, especially as more aggressive features are being used. When a website uses many of the sensitive and aggressive features in a particular composition and in a very short time, it becomes very likely that a device fingerprint has been created. This idea is the fundamental core of our quantitative fingerprinting model. After studying all existing tools and classifying all the JS features, our next step was to develop a browser extension that can record the JavaScript functions and analyze them based on our quantitative model. The core idea to implement this was to dynamically add an interception mechanism in front of the classified functions, especially before the real web page context is executed. By modifying the JavaScript runtime with code injections, we were able to intercept and record the functions without altering the default runtime behavior. Another major benefit of this approach is browser independence. This means that FPMON can be easily integrated into any up-to-date browser. When using FPMON, the browser extension will inject a script that is executed before any page script. This injected script modifies all the monitored JavaScript functions to log any function call. While recording each call, we can evaluate the classified features according to our fingerprinting model and hence calculate a fingerprinting score. Based on some well-defined thresholds, we can change the extension icon to be green, yellow or red. This easy-to-understand indicator will show you if the currently measured fingerprinting activity is low, medium or high. In the icon badge text, we can additionally show how many of the fingerprinting features have been called. To get more details about the measurement, you can click the extension icon. But before we get more into this, let's go ahead and see how FPMON works in reality. Now we will see how FPMON works when visiting a website. We load the page and get immediate feedback on what's happening in the background. The scripts in the background are executed so quickly that the website is not even fully visible to the user, but the device features are already extracted. When clicking the extension icon, we can see more details about the process that just happened in the blink of an eye. The FPMON Chrome extension will show you how many of the tracked JavaScript functions have been called, how this relates to our fingerprinting features, meaning how many features have been activated and how many of those features are labeled aggressive.
Furthermore, we show a descriptive list of features that are accessed when visiting the website, and the top three highest-scoring scripts that are active on the page, which helps to identify the root cause of the fingerprinting activity. While we now understand how FPMON works and how to use it, let's start to browse the web. We will have a short demo to showcase some interesting examples we found while browsing the web with our FPMON browser extension. Before we start, I want you to notice that we don't have any cookies stored, we haven't given any user consent and there's almost no user interaction with the websites we will load. First of all, we will visit wsj.com. By just loading the page, 25 out of 40 fingerprinting features will be activated. We go ahead and visit nasdaq.com and 30 out of 40 features will be activated. We load easyjet.com and 22 features will be activated. We load bankofamerica.com and 19 features will be activated. When loading newyorktimes.com, 25 features will be activated. When loading coinbase.com, 25 features will be activated. When loading savethechildren.com, 26 features will be activated. When loading healthcare.gov, 21 features will be activated. Before you start to think that every page uses all of these features, let us check some other examples. When loading google.com, only 12 features will be activated. When loading wikipedia.org, only 7 features will be activated. When loading nasa.gov, 6 features will be activated. When loading the website of the European Parliament, only 3 features will be activated. By loading torproject.org, just a single feature will be activated. And when loading wikileaks.org, not even a single feature will be activated. So as you can see, there is a wide spectrum of scores across a diverse set of websites. Now what we need to ask is, what is a good and what is a bad score? So let us draw a baseline to better understand the fingerprinting score. In this table, we put all the previous examples into a sorted list. To this list, we added the Panopticlick privacy test, which is a tool that has proven to be able to identify you by just using JS fingerprinting. If we visit Panopticlick using our browser extension, 21 out of 40 features will be activated. This relates to a total score of 53%. When we visit similar websites such as FingerprintJS or amiunique.org, we reach roughly the same scores of more or less 50%. If we consider this as our baseline, we can define that scores of around and above 50% are somehow concerning. Looking at the examples we have seen previously, there are many pages that score even higher than this baseline. These websites belong to financial institutions, news media, online shopping and even NGOs. We have to ask: why is so much device data collected when visiting these pages? Do they identify us? Who has access to all this data? Luckily, there are also many more pages with lower scores that provide very similar applications. To improve our understanding of this, let us increase the sample size. To see the bigger picture, we have automated FPMON to browse the 10k most popular websites and record how much fingerprinting is applied against a user by just visiting the landing page for 60 seconds. From our data, we can conclude that around 500 pages don't use any of the monitored features. On the other side of the scale, the highest score has been reached by only 5 websites, for example Breitbart, Foursquare and PolitiFact.com. They make use of 38 features, which relates to a score of 95%.
When looking into these statistics, we see that the biggest majority of the websites, almost 57%, use around 7 to 15 features. The median number of features is 11, with an absolute deviation of 5.2. Based on this statistic, we more or less defined the thresholds for our website rating. A website's activity is rated low if the number of features is less than or equal to the median feature use. A website is rated medium if it uses more features than the median website uses, but still less than the upper bound of the absolute deviation. Every website scoring above this range is rated high. As the data tells us, the distribution for sensitive and aggressive features is very different. Hence, we also make this distinction when rating a website. Based on our rating scheme, we labeled 53% of the 10k most popular websites as low, 28% as medium and 19% as high. We also found 10% of the websites to score the same or worse than our baseline, such as the Panopticlick project. In another evaluation, we had a closer look at how many websites use each of the monitored features. We see various features are used by many websites, regardless of how they are rated. But if we look into the right half of the chart, we also see that almost half of the features appear to never be used by websites that score medium or low. It seems that those features are used against the interest of the user and never serve a benign purpose. For these cases, we have to ask how important these features are. What website really needs to know CPU, audio, memory, connection and battery details in such a short time, and when just visiting their landing page? When comparing with previous research, we can also see that the utilization of these techniques has grown tremendously. In 6 to 7 years, there is 10 times more font fingerprinting going on and 3 times more canvas fingerprinting. We think this development is quite concerning and this is a good point to start thinking about what really needs to be accessible by a website. In another experiment with FPMON, we wanted to figure out who is profiting from this technology. Therefore, we started to analyze all scripts that we discovered when calling the 10k most popular websites. First of all, we noticed that the majority of aggressive fingerprinting attempts is caused by less than 1% of the scripts. When analyzing each of the scripts, we were able to identify some of the major networks that foster the use of fingerprinting. To do this, we classified each script based on its host name, file name, its fingerprinting score and a fingerprinting signature. The signature is basically a list of all the features accessed, in their particular order. By combining all these properties, we found more than 100 networks of different sizes. None of the networks that reach a high score is present on a sufficiently large number of pages to reliably track users across the internet. However, some organizations are on the edge of becoming a real threat to internet users. Their network size might be comparatively small at the moment, but they include high-profile pages and hence can analyze millions of users every day. The most harmful networks we discovered were created by Moat, which is now part of Oracle, and a company called Sift. They both reach a score of 50% and above and are present on roughly 50 websites. If you visit one of their clients' websites, their script will collect your device profile and send it to their own network.
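As an aside, the rating scheme just described boils down to very little code. Here is a hedged sketch; the thresholds (median 11, absolute deviation 5.2) are the ones from the talk, the function itself is mine, and the real model additionally weights sensitive versus aggressive features:

```python
# Simplified sketch of FPMON's low/medium/high rating. Thresholds are from
# the talk; the real model also distinguishes sensitive and aggressive features.
MEDIAN_FEATURES = 11
ABSOLUTE_DEVIATION = 5.2

def rate_website(num_features: int) -> str:
    if num_features <= MEDIAN_FEATURES:
        return "low"     # green extension icon
    if num_features <= MEDIAN_FEATURES + ABSOLUTE_DEVIATION:
        return "medium"  # yellow extension icon
    return "high"        # red extension icon

for n in (7, 14, 25, 38):
    print(n, "features ->", rate_website(n))
```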
Some of the affected domains are Breitbart, Wall Street Journal, New York Post, Udemy, Patreon, Kickstarter, Flickr and so on. However, while they are the most threatening ones, smaller networks are following their lead, such as Datadome or Adform. Another interesting one is the LalaPing network, a network of 17 streaming websites that share a common fingerprinting signature with a score of 88%. In the bottom part of the table, we also find less harmful networks, such as Akamai. Their fingerprinting scripts reach a concerning score, but at least the collected data is only sent to the content provider and not to a third-party service. Today we know that the data collection is part of their bot detection service. However, we need to ask whether harvesting such vast amounts of device data without user consent justifies its purpose. Another example is the network of Google and its subsidiary DoubleClick. Despite their huge network size, they seem to not analyze users based on the monitored fingerprinting features.

In our last experiment with FPMON, we evaluated how well a user is protected by some of the most popular anti-tracking tools. For this test, we evaluated EFF's Privacy Badger, DuckDuckGo's Privacy Essentials, Firefox with its strict privacy mode, and the Apple Safari browser against a set of 20 test cases. Unfortunately, most of these tools do not provide sufficient protection with respect to browser fingerprinting, but are hopefully still useful against other forms of tracking. The best solution that we found to work in most of our test cases was the Apple Safari browser. The underlying reason becomes clear if we look at how these solutions are implemented. While the plug-ins and Firefox work only based on blacklisting of well-known fingerprinting services, Apple has integrated a new and very different approach based on unification and herd immunity. The Apple Safari browser only supports a very simple and unified system configuration, which makes most Apple devices look identical. This reduces the capability of fingerprinters to identify a single device, without breaking web functionality. In conclusion, Apple has simply implemented what we have seen earlier in our feature analysis: there are too many features that don't have any value for the web user and are maybe even used exclusively against their interest. Hence, the best solution against the growing threat of browser fingerprinting is to unify and reduce the amount of data that is collectible via JavaScript.

To conclude our findings: we have seen that fingerprinting is present on many websites with sensitive contents, such as health insurance, financial institutions, news media and NGOs. In many cases, the amount of collected device data is so extensive that user identification might easily be possible, when comparing this behavior to research that has been done by projects like Panopticlick. Furthermore, fingerprinting is very stealthy and concealed. Many of the websites collect sprawling amounts of user data and send it away within milliseconds, often before the page is even fully visible to the user. In our experiments, no user interaction takes place and no consent is given. We think this practice of concealed data collection clearly subverts privacy regulations such as the GDPR or CCPA. Based on our experiments, we want to question to what extent website owners know the practices and true power of the third-party services embedded on their websites.
For some of these networks, fingerprinting seems to be part of tools that are used by website administrators to maintain their services, for instance for bot detection, analytics or security. Many fingerprinting scripts seem to be part of specific online services that ultimately collect vast amounts of user data. For example, archive.org has almost no fingerprinting activity (7%), but their donation page scores 90% because of a single third-party fingerprinting script. On the other side, the New York Times scores 60% across their website, but deliberately disables all data collection on their dedicated whistleblowing page (0%). These are just two examples of two popular websites that should underline that some people might actively participate in this technology, while others might just be victims. Last but not least, we have shown that most anti-tracking tools cannot sufficiently protect users. To really protect users, we need to simplify and unify the JavaScript interface, not extend it with just another useless feature. If you want to see more technical details and more results of our work, I invite you to read our paper. We have published our paper and the FPMON browser extension on our website, which you will find at fpmon.github.io. For any further questions, you can participate in the following Q&A or just contact us via mail. Thanks a lot for your attention and stay healthy.

Well, that was a super interesting talk, and quite crazy actually that this fingerprinting is so popular, sadly — and interesting to see some names in there. And luckily, we have Julian here as well after the pre-recording to answer some questions. So if you have any questions about what you've seen there, about the tool or about fingerprinting in general, you can still send us questions, either using the hashtag RC3CWTV or in our IRC channel, which is RC3-CWTV on the Hackint network. So feel free to join in there, and our signal angel will pick up those questions for us. And I do see that we have some questions in the chat already, so I think we can start, Julian. First question, probably interesting to many: what about uMatrix? Is it possible to block fingerprinting with this extension, or is FPMON a completely different beast?

So I don't actually know this uMatrix tool, and I also see that there are many people asking about other extensions. We only tried those four tools, because we looked at the most popular ones, or the ones we knew about. This uMatrix looks very manual: like a firewall, you have to configure it in quite a detailed way. It might be a solution for people who are really specialized in this, but I think it's not a solution for my mother, for example. And that's who we want to look at. — I think your tool has a quite different approach here: with uMatrix you choose what you need to block, while your tool shows in the first place what is even being tracked and what is happening there. Two different approaches, I think. — At the moment we are only monitoring what happens. We have thought about defense mechanisms, but when we saw that the existing solutions don't work at the moment, we didn't want to just publish another one that doesn't work in the end. So the work was really about understanding what's happening, what's going on, where do I get fingerprinted, and what networks are behind it.
Yeah, maybe with this data, now that you know what exactly is happening, you can target much more precisely, instead of doing something that might or might not work — you don't know. — Yeah, that's basically the next step we are targeting at the moment: to get some resources and then push this further. Because we have also, as was mentioned in the talk, found that... — Sorry, could you repeat the last sentence? We had a small connection glitch there. — Yeah, so I'm back now, right? — Yes, yeah. Just a small thing. — So, as we have seen, there are really just a few scripts that do this at the moment, but they are spread across the web, and so maybe blocking something might still be a good solution — but so far it seems the blockers haven't targeted the right domains or something like this.

Okay. Do you have any Firefox support planned? That's a good question; I also tried to install it in the background, but I'm a Firefox user and it doesn't seem to work there. — For our tests, we had a Firefox plugin, but we haven't published it at the moment because we cannot really manage to take care of it. Our solution is built for Chromium, which is the platform we actually use, and I can recommend using that. It should basically also work in Firefox, but maybe the UI doesn't work in the same way. For our automated analysis it did work, but we haven't published it. Maybe we will in the future.

Yeah. I'm not sure, maybe you know something about this, but how effective do you think uBlock is against such fingerprinting measures? — I think uBlock is also just based on blacklisting, and many of those tools might even use the same blacklists, which are published by some companies or smaller projects. From that point, I would bet that it's maybe not that effective, and the best solution we have is unification and simplification of those interfaces, which is a completely different way to think about this kind of problem. And I think it's also the right direction. Unfortunately, the Safari browser is not available for everyone. But I mean, the Firefox people could maybe also take up this idea, from my point of view.

Do you maybe know what mechanisms Apple or Safari is using and why it's so effective? I think you've shown in one slide that it's one of the most effective, if not the most effective solution right now against this. Does it work similarly — does it just count function calls, or evaluate things somehow in the background? — No, they have a completely different approach than blacklisting. They simplify and unify the JavaScript interface. So if you call... — Sorry to interrupt, Julian, we're having some really bad connection issues right now. We'll try something. I'm very sorry, but could you click on the little webcam icon at the bottom and disable the webcam, so we can save on some bandwidth? Maybe that will help the audio connection. — Yes. Like this? — Yeah, exactly. So now we sadly cannot see you anymore, but hopefully we'll have more bandwidth for the audio. Can you repeat the last answer, please? — So Apple has chosen a completely different way. They simplify and unify the JavaScript interface, and this way you cannot distinguish between different devices which use Apple's operating system.
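To illustrate what "unification" means in code — this is only a toy model of the idea, not how Safari actually implements it — one can force diverse devices to answer identically. The properties and values below are arbitrary examples:

```ts
// Toy unification: every browser that runs this reports the same values,
// so these features stop distinguishing one device from another.
const uniformValues: Record<string, unknown> = {
  hardwareConcurrency: 4, // everyone claims 4 cores
  platform: "MacIntel",   // everyone claims the same platform
  language: "en-US",      // everyone claims the same language
};

for (const [prop, value] of Object.entries(uniformValues)) {
  Object.defineProperty(Navigator.prototype, prop, {
    get: () => value, // fixed answer regardless of the real hardware
    configurable: true,
  });
}
```

The herd-immunity effect follows directly: a fingerprint built from unified values is still a fingerprint, but it matches millions of other devices and so identifies nobody.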
And that's actually a solution I would like to see everywhere, because the JavaScript engine so far has, let's say, useless features — nobody really uses many of those features, which are mainly used for fingerprinting. And that's actually a better way than blocking some weird domains, because we have also seen scripts that were dynamically fragmented, and they even used randomized domains to hide their fingerprinting. So blocking is not a valid solution now, and it will not be in the future. — Yeah, it's the same for IP blocking. I guess Tor is currently fighting the same thing: you block one IP, the next one just pops up. It's really hard to mitigate. — To make this maybe more visible: with the Apple devices, everybody looks a bit the same, so you cannot distinguish between the different users. — Yeah. If everyone has the same fingerprint, then yes, you get a fingerprint, but it's pretty much useless. It's a good strategy, I think.

Yeah. Did you by chance check out the Tor browser? Maybe the Tor browser also has some Apple-like countermeasures to this fingerprinting. I think there were a lot of headlines a few years ago, because some people realized: yes, Tor will hide where you are coming from, but not what your browser looks like. — So, I don't know, is JavaScript enabled by default in the Tor browser? — I'm actually not sure, but I think they do have an extension by default that will usually block something. I'm seeing in the chat that someone commented that Firefox has a resist-fingerprinting setting, and it seems like the Tor browser has this active by default. Do you know how useful that is? — At least for Firefox, we have tested exactly this case. It's called strict privacy mode. It's maybe the best solution of those we have seen, but it's not as good as the one from Apple with unification and herd immunity. So it works in some cases, but it also didn't work in many other cases. — Okay. Yeah, but hopefully better than nothing. And I'm just hearing from the chat that JavaScript is enabled by default in the Tor browser. — Yeah, but they have a lot of other tools that try to block stuff, and different levels of protection. So hopefully you're sort of safe there. But as we heard, it has this resist fingerprinting. — Yeah. And in the end, it's again what we had at the beginning of our Q&A: we as specialists might start to use Tor, but... — Could you repeat that one more time? That was broken up. — So the Tor browser is not used by everybody, right? Most people will use one of these default solutions, which you can easily install, and so on.

Yeah. We have one question from Twitter — finally, I think this is the very first question from Twitter that we have at this Congress. To explore the use cases for these fingerprinting integrations on commercial sites a little bit further: I think you mentioned briefly in the talk, about Sift and especially Akamai, that they can use this for machine-learning fraud detection, maybe for payment processing. Do you have any analysis of, for example, good versus bad fingerprinting? — Yeah. So bot detection is probably the main one, and security applications. In our studies, we have also seen that while you are logging into some services, you are fingerprinted. And that makes sense to some degree.
The question is how this data is maybe shared later on, who has access to it, or whether it is used in some other form. And that's actually a fundamental problem here, because like with cookies, we can never say whether it is a benign or malicious use. Does somebody monetize this data or not? And so on. So that's the complicated part. — Yeah, that's probably always a hard question: data can be used for good and bad, and the approach to collecting the data might be the same, but you never know what people are going to do with it. — Yeah, that's the dualism of technology. — Yeah, it's a really tricky thing.

And then another question from the Fediverse, I think — somewhere on Mastodon someone asked if using a virtual machine is an effective fingerprinting countermeasure. — For some of the methods we know, it might be effective. And it's actually also an interesting question whether you can, for example, then detect virtualization and so on, which is, at least under security considerations, another interesting topic. — Yeah, you'd probably then be able to classify people who try to evade fingerprinting measures; you get a new user-group classification right there. — Yeah. It's really tricky. And as always, it's again the question how capable people are, or how many people do this, right, to protect themselves. — Yeah, I still see people using no ad blockers, nothing, and I'm always like: how can you even still use the internet like this? I've tried it once and it's unbearable to me. — Yeah, totally agree.

One more question: where can this FPMON extension be found? How can you install it if you're interested in doing so? — At the moment we have a domain, fpmon.github.io — it's also at the end of our talk. There the paper is linked and you can also download the extension. It's published on GitHub, and so far you have to install it on your own in developer mode. But later on you might easily install it via the extension marketplace; we are actually in the process of publishing it there. — Yeah, that's great. So at some point maybe you can just go to the store and install it. That seems to be pretty important nowadays, that you are on some kind of store; otherwise people just don't access interesting developments. Not sure if I really like that. — Yeah. So, I mean, it was a research project, and since we thought, hey, these are really great results, let's publish this somehow and let people use it. For us, the only incentive in giving it to people is that they gain awareness of fingerprinting. We don't record anything or anything like that — that would make this tool super useless, right? We thought about maybe studying this, but that would again be tracking of users, so we don't want to do it. And that's why we give it out and have to earn some trust from people, so that they say: okay, I'll install this extension, which could basically see everything I do on the web, right? — Yeah, of course, that's the conflict. But I think just enabling people to recognize the problem is the first step to a solution — it's something you always hear, recognizing the problem is the first step. And maybe we can get some traction. We had so many talks on this stage already that showed, hey, there is a problem, and hopefully something happens out of this. — Yeah, true.
That would be great. Yep. Also, I think that's one of the key strengths of FPMON: it can visualize something which you usually don't see — it's too quick and too hidden to see. With this tool, you can make it visible. — Great. One more interesting question: can browser fingerprinting be circumvented by plugins? — I'm not sure... yeah, so you can also fingerprint users on other layers. — You were breaking up again, sorry. — So, other layers are not protected. We only target JavaScript fingerprinting, which is, in my personal opinion, the most important one so far. You also have something like TCP fingerprints, for example, but they are not as sophisticated as JavaScript fingerprints — then you'd better just track the IP or something like this, right? With JavaScript fingerprinting, there's so much weird functionality and there are so many side channels to figure out what your audio interface is, what your GPU is, and so on. That's the weird thing about JavaScript fingerprinting: it has so much access to your device.

Quite interesting. You mentioned IP fingerprinting there — how will this turn out with the growing, or hopefully growing, adoption of IPv6, where essentially everyone has a non-NATed IP? Hiding behind the NAT gateway worked maybe quite well in the past — if you're at a university, many people are using the same IP — and this won't work anymore in the future. Even with the privacy extensions, for the time that you keep that IP, you can probably be tracked through it. — Yeah. One more question: is setting the language or the localization of your web browser to English, or some international default, useful? Maybe they can track you by the specific language you have enabled in your browser. — I mean, I wouldn't say it can protect you, because it's just a single feature out of, let's say, 40. So it's too little of a change to make you invisible. It depends on how sophisticated their technology is, and if you think about, let's say, AI technology and so on, which is probably already deployed in this domain, those kinds of small changes will be tracked. They also have to allow for some kind of variety, because you change your location here and there, and the time on your system might change, and so on. So this process has to be fuzzy in some way.

I'm just sitting here, but there are more questions coming in by the minute. It seems like there's a very lively discussion in the IRC chat, so if you are not there yet, you might want to join them. And I'm not sure, Julian, will you be taking a look at the IRC chat? — I'm in the IRC, yeah. — Great. So you're already in there, and maybe you can answer some questions. — It's too much to follow quickly. — Ah, yeah, we can do it like this. And then I guess we'll somehow wrap it up here. Thank you again for the talk — a very interesting introduction — and also thank you and your working group for this tool. More weapons to defend ourselves are always great to have. And yeah, let's wrap it up here. Thank you.
This talk will be about FPMON, a browser extension that shows you where, when and which browser fingerprinting method is applied against you. You can use it to test your favorite websites and check your own services for 3rd-party fingerprinting scripts. It can also be used to test various browser privacy tools. Tracking users is a ubiquitous practice in the web today. User activity is recorded on a large scale and analyzed by various actors to create personalized products, forecast future behavior, and prevent online fraud. While so far HTTP cookies have been the weapon of choice, new and more pervasive techniques such as browser fingerprinting are gaining traction. Hence, in this talk, we describe how users can be empowered against fingerprinting by showing them when, how, and who is tracking them using JavaScript fingerprinting. To this end, we conduct a systematic analysis of various fingerprinting tools. Based on this analysis, we design and develop FPMON: a lightweight and comprehensive fingerprinting monitor that measures and rates JavaScript fingerprinting activity on any given website in real-time. Using FPMON, we evaluate the 10k most popular websites to i) study the pervasiveness of JavaScript fingerprinting; ii) review the latest fingerprinting countermeasures; and iii) identify the major networks that foster the use of fingerprinting. Our evaluations reveal that i) fingerprinters are present on many popular websites with sensitive contents (finance, news, NGOs, health, etc.); ii) they run without user consent and subvert current privacy regulations; and iii) most countermeasures cannot sufficiently protect users.
10.5446/51960 (DOI)
Hi everyone, welcome to Chaos West at RC3. We've got our next talk now. It's going to be about Alexa, the home assistant: how Alexa is watching us, what Alexa knows about us, and also about the things that Alexa shouldn't actually hear from you, but still records due to accidental triggers. So we've got Svea here. She's a recurring speaker at C3 — she's given around three or four talks so far, so we're looking forward to another one; it's always a good sign when people have been here before. We've also got Maximilian Golla — he's at the Max Planck Institute for Security and Privacy in Bochum — and Lea, she's from Ruhr University Bochum, working on IT security. The talk is pre-recorded by Svea and is going to last around 40 minutes, and afterwards we've got an extended Q&A session with all three of them. So as usual, we're very happy for you to ask questions on IRC in our standard RC3-CWTV channel, and on Twitter or Mastodon under the hashtag RC3CWTV. Yeah, so let's start the talk.

Hi, so nice to be here at RC3, the remote experience, with my talk. I'm glad that you tuned in and decided to listen to my experience and my research. My name is Svea Eckert. I'm an investigative journalist with ARD and NDR, Germany's biggest public broadcasters. And I'm basically into tech: I do everything that has to do with privacy and security, and also with data protection. Last year I did a piece of research, and I definitely want to share my experience and all my data and the things I did there. I did this research for around six months: I decided in 2019 to collect Alexa data. So I had an Alexa in the living room for six months and wanted to know who is listening, how much Echo or Alexa is collecting, and what is happening with the data — what can you do with it? This research was published at Steuerung F (STRG_F), which is a pretty young format with the NDR. You can also watch it there, but stay here, because this is the background talk to the research, so you will definitely gain a lot more insights here.

Okay, then let's dive in. First of all, I think it's pretty important to know why you should definitely look at smart speakers and smart assistants: because they are on the rise. If this were an actual stage, I would now ask everybody who has an Alexa at home, who is using Siri — but okay, this is not a real stage, I cannot see you, and probably with this audience everybody would say: no, I don't use Alexa. But if you look at the numbers, it's definitely on the rise. In Germany, it's one out of four people; in the US, it's three out of four; and in China, it's even more. So smart speakers and smart assistants are here to stay, and they are growing. And that's even more reason to look at them. Because when you look at where they are, you see that Alexa is everywhere. It is in the car. It is in the TV — maybe you know Amazon Fire TV. And it can be in your shower. Smart assistants are built into many, many more things in our everyday life, and we get in touch with them. So I think this is definitely the reason to first understand how they work, and then what they do. So let's get to the first part: how does Alexa work? Alexa listens all the time. I think this is very important to know and to understand: Alexa is always turned on, and it is always listening with an internal chip.
And it also constantly shares some metadata with Amazon. It's not voice data, but as researchers found out, Alexa shares information at regular intervals. And of course, Alexa is always looking for updates. But back to the voice data. As I said before, Alexa is always listening, but of course it's not always uploading voice data. There is the internal chip, and when the internal chip understands the word 'Alexa' — which you can set as a wake word — then voice data is recorded and transmitted to Amazon servers. Only when the internal chip has identified the correct wake word is the connection with Amazon established. Now you would think: that sounds great, huh? But as anybody who actively uses a smart assistant like Alexa, or also Siri or Google, knows, this is not always the case. Alexa or Siri turn on more often in your daily life than you would like. They turn on when you actually don't want them to — when they understand 'Alexa', but it was not 'Alexa'. And I definitely want to come back to this issue, because that is where it gets interesting: this is probably the data you don't want to share. I will come back to this when we look at my own data.

To investigate this matter, I conducted an experiment. I installed an Alexa in my family's living room, as I said in the very beginning, and collected my own and my family's voice data for six months. I started in November 2019 and ended in April 2020. And I want to start by sharing some misrecordings with you. To do so, I think it helps to have a small bit of background first. Because of the GDPR, there is a function in your Amazon customer account where you can look up all your voice data, all your recorded data. This is pretty easy — I think it is in the sub-sub-sub menu — but anybody of you who has an Alexa at home can do this and check which voice recordings Amazon has of you. So I went and looked into it, and you find a lot of audios. And there are also audios that have a small label on them, and in the label it is written: this audio was not meant for Alexa. So of course, these were the audios I looked at first. And I heard some creepy things. So let's tune in to one of them. Let's play that. Check it out. Okay. You hear this now for the first time; I have heard it like a hundred times since I started to prepare this talk, but I still don't know what it is. I think it's maybe television: there was some word in a television series or movie that triggered Alexa, and then she recorded it for a couple of seconds. But let's tune in to the next sound and see what she recorded. Here. What you hear here is family chatter and the babble of kids in the background. And I think Alexa is trying again and again to understand something, because you hear the signal — a little 'ding' — in the background. I don't know if you caught that. I think Alexa is trying to hear the wake word, not identifying it, and then it's recording and recording and recording. So, no big secrets told here — it's only chit-chat and babble. But anyway, it was kind of creepy for me, sitting there, looking at all this data from six months, and hearing some chit-chat or some strange voices from a movie I might have seen at that time. So let's turn to the last one. Yeah, a cooking sound. What has a cooking sound to do with the wake word?
Why did she record that? I will get back to that and explain it in a couple of minutes. But first of all, let's dive deeper into my Alexa recordings. I mentioned already that I have a family and a little daughter. Kids love Alexa. So this is what they do when you are not present: 'Alexa, tell me a joke.' 'Alexa, I'm going to bring you the gummy bears in the chocolate oven.' 'Alexa, buy this, buy that' — pretty common. You can use Alexa as a shopping list and add things to it. So this is what kids do when you are not in the room: they are ordering stuff like chocolate and gummy bears. And the last one, also very typical: 'Alexa, putz mal' — 'Alexa, clean up'. Okay: if you are not present, kids are doing stuff with Alexa, Alexa records it and sends everything to Amazon. Those were some of the more unusual things I saw in the data. But there are also the very, very common things. I sat there for hours and listened to every recording I had from the last six months, and this is what I heard nearly all the time. You can also turn the light on and off with Alexa — I connected Alexa to the living room lamps, so you can say: turn the light on, turn the light off. Do you see where I'm going with this? Okay, let me play the last one, then I'm getting to it: some music with Alexa.

At first glimpse, these functions may sound boring in this whole context, because we were talking about Alexa kind of spying on me — recording me while I'm doing super private things. But these normal, boring audios are also pretty interesting, because everything comes with a timestamp. And now let's think of metadata. What can you do when you have the voice command and you also have the metadata, the date and the time? What might you be able to do with it — like reconstruct a day, or deconstruct a life? I was very curious about this. This was the reason I gave all the data to my lovely colleague Marvin. Marvin also works for NDR, on the data team; he's pretty good with data. I brought him a USB stick and said: okay Marvin, I trust you, my and my family's life is now in your hands, please do your magic. I didn't do it myself because I had this idea: when I give this to Marvin, then a stranger is looking at my data. And I found this a pretty interesting thought, because Amazon is a stranger too. So what could a stranger find out about me with my data? Let's go back to him later and see what he found out.

Meanwhile, let's come back to the accidental triggers and dive deeper into the life of smart speakers. This is where these nice guys come into play. These are scientists from Ruhr University Bochum and the Max Planck Institute for Security and Privacy. They conducted a pretty interesting experiment to find out more about these accidental triggers. I visited them, I filmed them, and I also had long interviews with them. And later on, I managed to get two of them to come to our Q&A. So if I don't cover their experiment in all its depth now, you will get the chance to ask them about it later. So let's dive in. What did they do? They simulated a living room and put a lot of smart speakers in there. There was an Echo, there was a Siri, and there were also smart speakers from Google and from other players in the market. And then they started to play TV series.
Like How I Met Your Mother, like Stranger Things, and other popular favorites — I think Game of Thrones was also there. They played it for days and days and nights, all the time, to see when these smart speakers trigger accidentally, and which words, noises and sounds make them wake up. And how often — because they wanted to find out which sounds and noises do it, and they also wanted to find out how robust the speakers are. It was pretty interesting to see which smart speakers have a stronger wake word and which are weaker — which wake up more often when they shouldn't. If you want to learn more about it, you should definitely also go to their website and check out all these nice examples. In the end, they ran it in English for nine days and had 314 accidental triggers. And in German, they also did some nice things, like Traumschiff or the Tagesschau, which is the biggest news show in Germany. They ran that for seven days and found 168 accidental triggers. They also did some other cool stuff, like opening up an Echo smart speaker and doing some reverse engineering. They really tried to get deep into the material and understand the smart speakers very, very well.

Some of these wake words were also a little bit funny: instead of 'Hey Siri', there was 'Daiquiri' — 'Ein Daiquiri, bitte', 'one Daiquiri, please', was a little sequence from Traumschiff. And Daiquiri, I think you get it, is very similar to 'Hey Siri'. The smart speaker mixes up the sounds, especially when accents are used, and confuses one for the other. The same thing happens, of course, with something like Alexa and Alexandra. I think all these cases are very obvious: Daiquiri and 'Hey Siri', or Botswana and Cortana — there is a resemblance, the smart speaker mixes them up, and then it turns on and sends the data. But what about the cooking sounds? What about the children's babble that you have already heard? Why does the smart speaker open up a connection? Why is it triggered? Okay, they had this phenomenon too. They had the 'Wuppertaler Schwebebahn'. And you might actually ask why a smart speaker triggers at 'Wuppertaler Schwebebahn'. I definitely want to explain it to you. As you can see here, Alexa's on-device wake word detection is based on a neural network that decides if it has heard the wake word or not. And as maybe some of you already know, an artificial neural network works and learns not like a human. Instead, it is fed with millions and billions of samples and decides, with every new sound, whether it is the wake word or not. And if the neural network has never heard the phonemes of 'Wuppertaler Schwebebahn' before, then it is possible that it confuses them with the wake word.

Okay. Now to the crucial point of all of this. To train this big neural network, the big tech companies need actual people to listen to the accidental triggers, but also to all the other stuff, because there have to be people behind all of this to correct the network, make it learn and make it better. And now we come to the point: who else is listening? Here you can see on the right side a big Bloomberg Businessweek title: 'Alexa, what's privacy?' And I really like this title, because you have this ear inside of the smart speaker. They brought a big story in 2019.
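A rough intuition for this kind of confusion can be given with a plain edit distance over phoneme sequences. This is only a toy stand-in for the weighted phone-level metric the researchers actually used, and the phoneme spellings below are ad hoc:

```ts
// Naive Levenshtein distance over phoneme tokens: the fewer edits between a
// phrase and a wake word, the more plausible an accidental trigger becomes.
function editDistance(a: string[], b: string[]): number {
  const d = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      d[i][j] = Math.min(
        d[i - 1][j] + 1, // delete a phoneme
        d[i][j - 1] + 1, // insert a phoneme
        d[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitute
      );
    }
  }
  return d[a.length][b.length];
}

// "Hey Siri" vs. "Daiquiri", in made-up phoneme tokens: only 3 edits out of 6.
editDistance(["HH", "EY", "S", "IH", "R", "IY"], ["D", "AY", "K", "IH", "R", "IY"]);
```

A real system would additionally weight substitutions by acoustic similarity, which is why 'Daiquiri' lands even closer to 'Hey Siri' than the raw count suggests.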
I think they were the first ones who made it really public that actual people are sitting behind all this — that there are people doing transcriptions and training the neural network. I also managed to talk to several transcribers for this research, and we will hear from them later. At first glimpse, you may think: okay, but this is such a small proportion of what they listen to. If you ask Amazon about this, Amazon tells you it's only 2% of voice recordings that are used for human transcription. And if you ask Apple, they tell you something similar: they say it's only 0.2%. But when you look more closely, you see that for Apple alone, this is more than 350 million voice recordings per year. So this is quite a lot.

Then let's go meet the first one. I did a lot of research, I approached them, and I managed to speak to a couple of them — but going on camera and giving an interview is always a different thing. The woman we will hear now worked for Amazon in Aachen, which is a German city in the west. She worked there almost from the beginning of the German office and stayed a couple of years, but not until today — she's long gone. She only wanted to speak to me anonymized, because she had signed a contract saying she would never tell anything about her work. That's the reason why, when we look at her now, you won't see her face or hear her voice. Okay, let's go. She quit in 2018, and I think it's pretty important to say at this point that she never knew who was talking. She heard the voice, but there was no identification number, no Amazon account number, no name, nothing like that. Only when people spoke their name loudly into the smart speaker did she know: of course, this is this or that person. Or — she told me — there was a person who told the smart speaker credit card numbers, and then of course she had his credit card number. But usually it was a very anonymous job. Nevertheless, she felt more and more uncomfortable, because she got the feeling that she was listening and tuning in to actual people's lives. And I have to add here that in this case these were contractors: they were not paid very well, and there was not a very good atmosphere in this office. They were like human machines. So I think this also might be a reason why this work was not very fulfilling. But okay, let's go on with the story.

It's not only Amazon doing this — also, of course, Apple and Google; they all need to do this to train their neural networks. And at Apple, too, I found a young woman who was willing to talk to me about her work. In her case, for a better understanding: with Siri, it's a little bit different than with Amazon's Alexa, because with Siri you also do SMS dictation and email dictation; some lawyers and doctors use Siri to dictate notes. This is the reason why with Siri the soundbites are maybe much more intense than with Alexa, which you use for daily commands. With Alexa, I think the most intrusive thing is if a recording goes long and you hear some strange voices for 10 or 15 seconds. But with Siri, it's different, because you use it for all this dictation. And this is the reason why she heard a lot more intrusive things. Okay, let's get to her.
'What I personally heard were, for example, conversations with doctors, or recordings where someone had talked with their boss in a meeting. Sometimes it sounded like someone was fighting in the background. You heard sex.' — 'Why did you hear sex?' — 'You don't have to hold the phone; people might just have it lying next to them. Something sounds like the trigger word, the assistant takes off and thinks it heard something. And then there were of course the text messages. They were exciting too. When people really told you things, you knew: the voice I hear now is the one that is cheating on his wife.' — 'How was it for you?' — 'It was a job. In the end, of course, it was a job to listen to other people's voice messages.'

I talked to her, but I also talked to another Apple transcriber, and he told me basically the same. He also said that it felt more and more uncomfortable to listen to all these very, very personal messages and to get insights into other people's lives. Of course, this is kind of creepy. And of course, I and also other journalists asked Apple about this, and they shared this statement with me. This old statement is from August 2019, when they said: okay, we have to improve Siri's privacy. They definitely changed some things. The important thing I have to point out here is that when you now purchase a new Apple device, they show you a very small, tiny checkbox asking: do you want to help Siri improve? If you click okay, then your data is transferred to transcribers who do the 'grading' — this is the technical term for what they are doing. They are correcting the neural network; you tick a little checkbox, and then they do it. But it is an opt-in, anyway. So some kind of improvement was made at Apple.

Of course, these accidental recordings are very intrusive to listen to — and not only the accidental ones, but also the SMS and all the other messages. But in the end, they don't give a very deep insight into a person's life, because it's all very fragmented. And this is where the metadata starts to shine. So this is where I want to go with you now: to my Alexa experiment, part two. You probably remember Marvin from the very beginning of the talk, my very nice NDR colleague. If you recall, I gave him my Amazon Alexa data — not only the voice recordings, but the voice recordings with timestamps. After analyzing them, he got to know me and my family very well. And so will you, because I want to show you at least the main key findings he got out of my Amazon data. He prepared a little data exposé for me, and I met him. This was the first thing I saw: as some of you probably know, the Snowden book 'Permanent Record'. And this is what he wrote there: look, here is a sad Snowden staring at you. I think I bought it as an audiobook, and this is why it was listed in the Alexa recordings. Okay, but enough with the funny stuff. Here is Marvin showing me my home, my house. And actually he got it nearly right — surprisingly right. We have two lamps in the bedroom; we actually have four lamps in the living room, but they work as one — when I say 'Alexa, living room on' or 'off', it turns four lamps on and off — so he got that right. And all the other rooms he got right too. So this was, for me, the first slightly creepy point: you can reconstruct my home, my house, out of my Alexa data, because I have connected all the lights to it. Go on.
What can you see here? The blue bars are the timers, and the red ones are all the other Alexa voice commands. You can see how often Alexa was activated daily. He could definitely see the first lockdown in the data, in March, because that is when you see that we were at home more often. And you can also see weekdays and weekends. How can you see the weekends? You can see the weekends because the timer count is always high there. So maybe these are the eggs for breakfast, or — I know it — I like to bake cake, so I think this is my weekend baking. You could even see if I hadn't baked a cake: then maybe there are only one or two timers set. Creepy and not cool, what you can do with this Alexa metadata. But this is not the end — it goes on. Let's check out the last slide from him: lots of dots. With these dots, he tried to reconstruct my daily routines — when do I get up, when do I go to bed. You can definitely see this with Alexa, especially in wintertime, because when you turn the lights on or off, you can see it in the Alexa data. And you see here, on the y-axis, that I get up between seven and eight, and that I go to bed pretty late, between 11 pm and midnight. So this is also something you can see in my metadata. And I'm pretty sure that if you took more time and effort and started to combine the data, or looked at other data from my life, you would get a pretty good idea of who I am. And it's definitely worth thinking a step further, because you also buy stuff with Amazon; maybe you have an Amazon Kindle, then you read books with Amazon. In the end, they get a pretty good idea of what kind of person you are. And I think this is a very huge and vast data collection in the hands of a multi-billion-dollar tech company.

Let's jump to the conclusion. What does all of this mean? Of course, you could now personally say: okay, I ban all the smart speakers and smart assistants out of my life. And probably today you could do this. You can deactivate smart assistants on your phone, you can decide never to buy a smart speaker for your home, avoid it in a hotel; when you are at a friend's house and they have a smart speaker, you can probably ask to deactivate it. But when I thought this through, I realized that in our society this is not really possible anymore. I think we are back at the beginning: smart speakers are on the rise, and they are here to stay. It is a technology which is evolving very fast. And it's also an interesting technology, because today many people are always staring at their screens, and smart assistant and smart speaker technology works in a way that people don't have to look at their screen all the time. But for that, the assistants have to learn. And what is the price for all of this? Because in a way, people who use this technology are unwilling guinea pigs in the whole testing setup, and they are giving up their privacy for that. So I spoke with the data commissioner here in Hamburg about this problem and asked him what they are doing about it, and what they are doing to set boundaries for the big technology companies like Amazon and Apple.
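Going back to Marvin's routine reconstruction for a moment: nothing about his actual tooling is shown in the talk, but the inference is simple enough to sketch. The record shape, field names and string matching below are all invented for illustration, and the code assumes the commands are sorted chronologically:

```ts
// Sketch of reconstructing daily routines from command timestamps alone.
interface Command {
  time: Date;
  text: string; // e.g. "turn the light on", "set a timer for 10 minutes"
}

function dailyRoutine(commands: Command[]) {
  // Group commands by calendar day (assumes input sorted by time).
  const byDay = new Map<string, Command[]>();
  for (const c of commands) {
    const day = c.time.toISOString().slice(0, 10);
    if (!byDay.has(day)) byDay.set(day, []);
    byDay.get(day)!.push(c);
  }
  return [...byDay].map(([day, cmds]) => {
    const lights = cmds.filter((c) => c.text.includes("light"));
    return {
      day,
      wakeUp: lights[0]?.time, // first light command ~ getting up (wintertime)
      bedtime: lights[lights.length - 1]?.time, // last light command ~ going to bed
      timers: cmds.filter((c) => c.text.includes("timer")).length, // weekend baking?
    };
  });
}
```

This is the whole point about metadata: no audio content is needed — timestamps and command types alone recover wake-up times, bedtimes, weekends and lockdowns.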
And I must admit, the answer was disappointing. I definitely like the GDPR — I think the GDPR was a pretty good idea and a good solution — but the GDPR is not helping here, because, for example, Apple is sitting in Ireland and Amazon is sitting in Luxembourg, and it always depends on the data commissioners' offices there whether they pursue or push anything. And if they don't, nothing really happens. And there's always the thing with the opt-in versus opt-out: with Amazon, it's still the case that you are opted in when you buy it and set it up, and you have to opt out — but I will get back to that. So the data commissioner did not really help me here, and I went on and asked Apple and Amazon about this again. The answers from Apple I already told you: they say, okay, we changed it, it's an active opt-in now, which I think is kind of good. And they also said: we are not using contractors anymore, this is now done by Apple employees — which I personally think is kind of wishy-washy, because who is now listening to my voice recordings? For most people it doesn't matter whether they are Apple employees or working for a subcontractor. But these are, I think, two of the most important things they changed in 2019.

With Amazon, it's a little bit harder, because they did not change it into an active opt-in option. You are opted in when you buy an Amazon device: when you set up Alexa, you tick one checkbox — I agree with the terms of service and everything — and then it's done. If you want to change that, you have to go to the sub-sub-sub-sub menu, and there you can opt out of the transcription mode. So if you don't want that, you would go there and opt out. The data will of course still be transferred, but there won't be any human transcribers working with it. And they also said: it's only Amazon employees working with your data. So all these answers were not very satisfying, and I was looking for a model which would give me a glimpse of what a better solution could look like — something that can train a neural network and is not such a big deal for your privacy as the user of a device. And I found this, and I like the idea very much: this is the Common Voice project (by Mozilla), where you can give your voice data as a donor. It's basically a crowdsourced project: you can speak words and sentences, read a book, and your voice is recorded and analyzed. It is meant to become a very big database of donated voices, not of data harvested from people all over the world. So I like this idea very much, but I also read rumors that, due to Corona and other things, this project might be terminated — though I haven't found a recent official statement about this.

So I want to come to my final remarks and my closing. I think the first and one of the most important points to take from this talk is that smart speakers collect a lot of data. A lot. As I told you in the very beginning, Alexa is always listening, and Alexa also periodically builds up a connection to Amazon servers to look for updates and maybe to share some data — not voice data, but other data. This is something you can see in the logs.
Then you have all these timestamps, all this metadata. So smart speakers are definitely collecting a lot of data, and you can do a lot of things with it, as you have seen in my example. The second thing I find pretty important is that metadata is even more intrusive than voice data. At first glimpse, you may think: oh, she heard sex, or she heard somebody cheating on his wife. No — metadata is the key. This is what I saw in my experiment: if you have all this metadata about me, you get a very good picture of me as a person and as a family member — a much better picture than you get from the short voice commands I give, or from the accidental triggers. And last but not least: the GDPR is not saving us here at the moment. So there definitely have to be better solutions for the future. If we want to keep using and developing this technology, there have to be better solutions. And I definitely think it's not only us users who are in charge of protecting ourselves; politicians are also asked to do something about this and to find good boundaries, so that we are not the unwilling guinea pigs in the whole play. Yeah. Thank you so much for listening. And before we go to the Q&A, I definitely have to thank the many, many people who support my work, who work together with me, sharing ideas, data and experiences. These are the people from NDR, but also the scientists, in this case from Ruhr University and the Max Planck Institute. And I'm very happy that two of them will join us now for the Q&A. Thank you so much, and please welcome with me Max and Lea.

Thank you. Great, thanks for that excellent talk. It was amazing to see what you did there. So now we've got not just Svea here, as I said at the beginning, but also Lea and Max. I'll just let them introduce themselves; that's probably better than me saying anything.
Yeah, okay, I will start. So I'm Lea and I'm working at Ruhr University Bochum in IT security. In the research paper Svea introduced in her talk, I was focusing on the deep learning and speech recognition part, looking into what the characteristics of such accidental triggers are and how we can craft such triggers by looking at a phoneme-based distance of common English words to the respective wake words. — Okay, yeah. And I'm Max. Hi, I'm a postdoc at the Max Planck Institute for Security and Privacy in Bochum. I'm mostly interested in usable security and privacy, and in this project I was responsible for the technical setup — meaning that we jailbroke an Amazon Echo speaker, reverse engineered what is happening on the first-generation device, added light sensors, observed the measurements, and obviously watched hours of Game of Thrones. — And then there's Svea, now also live. But she already introduced herself, I think. — I hope so. Yes, I'm Svea. Hi.

So we're now live and you can ask your questions via IRC, Twitter and so on. We'll read them and collect them, and I guess I'll start with our first few questions. The first question that was asked is: do you use Alexa, etc., yourself? — I think for me it's kind of obvious: I used Alexa a lot, and now we have Siri at home. So the research didn't change everything. I'll start with that — but yeah, Max or Lea, what about you? — Yes, so I don't use Alexa or any other speech assistant. And yeah, Max? — For me, it's the same. I see some usability benefits, but I don't have a smart home, so I don't know what to do with it — it would be just a timer. — So you stay privacy-preserving rather than have such a device at home. So even though you now know a little bit more, what you've uncovered hasn't changed your view a lot. Is there anything that surprised you in the course of your research, something you didn't guess before? — I think for me: one thing is guessing or thinking about something, and it's always different when you try it on yourself — when you listen to all your recordings from six months ago, when you look at the metadata, or better, when a stranger comes and tells you: oh, you're going to bed really late. So of course I anticipated everything, but making the experience yourself goes deeper and is, of course, creepy — especially the metadata experiment. I thought about it a lot, and in the end I liked Siri better, because Apple's business model is not so much on the data side as Amazon's maybe is. But it's definitely a trade-off you have to make here, between the comfort of having a smart speaker and a smart home on the one side, and the whole privacy issue on the other.

And the two researchers? — For me, it was very exciting that you can actually reproduce those accidental triggers. I got a push notification on my phone when we first detected one, by brute forcing or playing videos of TV shows, newscasts and so on. When it first triggered, I was at my home — I did have an Alexa at that time too — and I tried it myself: started Netflix, played this episode at this very minute, and it actually triggered. That was obvious, but also very surprising for a researcher: it's reproducible. — Did you do anything to get closer access to Alexa than you would as a normal user — to kind of hack the device? — It's nothing novel we did; we followed tutorials online, because there was some previous work on that. Usually academia does not try to hack devices, but in this case we did, to investigate what is going on on the speaker, the Amazon Echo first-generation speaker. So we jailbroke the device, we had a root shell, and then we hooked the OpenSSL library, dumped everything that was sent to the cloud with Wireshark, and investigated what is going on on the device: what happens if there is something like a near miss — that is, something that sounds like Alexa but is not Alexa — and what happens if the device detects the wake word Alexa but the cloud then decides it was not Alexa, and things like this. It was unclear. Is the LED ring really indicating that it's listening or not, uploading data or not, and so on? And what we can say is that whenever the LED ring is not on, no data is ever sent. Even if there's an accidental trigger on the local device that is not verified in the cloud, the ring will turn on, and within one or two seconds the cloud will decide whether it was a trigger or not.
So not much more data than that one to two seconds will be uploaded. So for Alexa, the processing of the wake word doesn't just happen on the device itself; there is some data that is always transmitted to the cloud, even if there is a near miss. I think there was a question: how do you know that Alexa only listens after the keyword? And I think your answer is: well, it doesn't. That's correct. So first of all, as said in the stream, the Echo speaker is always listening, always recording, but only uploading data to the cloud when it assumes that the wake word has been detected. And for this it's a two-part process. There's a local neural network on the device itself; it's rather tiny, not so good. When its score exceeds a certain value of about 0.57, in the range between zero and one, it decides to upload data, and there's a constant SPDY connection to the Amazon cloud. So if the threshold is met, it will start uploading data. And then the cloud neural network will check this as well and verify whether the wake word is really there or not. If it's not, it will send a stop signal, and all of that happens within one or two seconds; it's very fast. The on-device network is more lightweight; you cannot have the huge neural network, as it exists in the cloud, on the device. So they use the second network in the cloud to verify whether the wake word was really recognized. Do you think that's an actual limitation? Because you could just put a better CPU inside, or a TPU or something that could do it. Or is it just an excuse, where they say, well, sorry, we want to be absolutely certain, and so we send it to the cloud? Do you have a feeling about that? I have no insider information, but the project started in 2011, or at least the code that we have seen includes libraries from that time, and nowadays Alexa is in your car and even in your shower. The Echo speaker could be quite powerful today, but it was not at that time. Nowadays we have many devices with Alexa built in, your smart TV and so on, and I guess there's a real technical limitation: you cannot put the same neural network into such a small device. Right, but one could possibly imagine that if, for example, you have a pretty powerful hub, or your phone, the data could just be sent to your phone and processed there, because phones are getting better and better; the chips are now so powerful. Correct. But there's a very unusual usability trade-off here: it adds latency if you first send this data to your phone to verify it, and so on. So there are many options to improve the privacy and the whole process, but it's always the question whether your customers are interested in this, whether it's worth spending the money on development, and so on. Yeah, Alexa is pretty cheap, so it would definitely raise the cost of the speaker. And I think the gain for Amazon is clearly to push the Echo speaker out all over the world and to increase the numbers, because I don't think they make the big money with the hardware. It's more interesting to collect all the speech data to train the neural network; I think this is one asset here. And on the other hand, of course, there's the voice shopping and the TV watching, which is the other asset.
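To make the two-stage process described above easier to follow, here is a minimal sketch of the on-device/cloud split. The 0.57 threshold comes from the talk itself; the dummy "models" and all function names are purely illustrative assumptions, not Amazon's actual code.

```python
# Illustrative sketch of the two-stage wake-word check described above.
# The 0.57 local threshold is from the talk; the toy scoring functions
# are hypothetical stand-ins, not Amazon's real implementation.

LOCAL_THRESHOLD = 0.57  # on-device score in [0, 1] that starts an upload

def local_wake_word_score(audio):
    # Stand-in for the tiny on-device network: a toy that "fires"
    # whenever the (fake, text-based) audio contains "alexa".
    return 0.9 if "alexa" in audio else 0.1

def cloud_verify(audio):
    # Stand-in for the much larger cloud-side network, which re-checks
    # the same audio and can overrule the device.
    return audio.strip() == "alexa"

def handle_audio(audio):
    if local_wake_word_score(audio) < LOCAL_THRESHOLD:
        return "idle"             # nothing leaves the device, LED off
    # LED ring turns on; upload starts over the persistent connection.
    if cloud_verify(audio):
        return "process_command"  # wake word confirmed by the cloud
    # Cloud sends a stop signal within one or two seconds.
    return "stop_upload"          # accidental trigger, a near miss

print(handle_audio("good morning"))  # idle
print(handle_audio("alexa"))         # process_command
```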
I mean, that's also a very easy way to collect data, which on the other hand they use, or will use, to improve the model; so for them it's an easy way to get the data. Someone asked here, I think it's somewhat related to that: what do you think about not paying with your personal data and implementing the speech recognition technology yourself? There doesn't have to be a single bit of data literally leaving your home, as long as you have the knowledge to implement such a system yourself. So I think there's a thread around open source, something that's obviously very popular at C3. Have you looked at such open source solutions at all? I've seen some mentions of mycroft.ai, for example; I think that's a pretty big project. Yes, there is open source speech recognition software. We didn't have to include that in our studies, because we were focusing on something else, but in general it's possible; it should be possible. You can build your own speech assistant which runs offline, or offline most of the time, especially the recognition part. Yeah, I guess the first commercial implementation of this is the on-device speech recognition in the newer Google Pixel phones. That is completely offline, so it can definitely be implemented even on limited hardware. And there are some existing projects, but none of them are commercially active or sell units, as far as I am aware. So there was a critical question; I'll just read it out and add something myself. Journalists have been accused several times this year of deliberately exaggerating stories for clickbait. What precautions were taken in this research? I ask this because I already found some safeguards against accidental listening by reading the manual. I don't know what exactly that person found; they haven't clarified. Maybe they can share it on IRC and I'll come back to it. But did you read the manual, and did you discover something that Amazon said they did that was actually wrong? Of course I read all of it, the data protection agreement, whatever it is called, that you say you're okay with. That is also part of the documentary: very interesting reading, seeing me read all these things. No, Amazon is not hiding anything; they are writing it in there, and they are writing it pretty clearly. It's only that nobody reads this stuff; I think this is the thing. And with Amazon, what I find problematic, also in a GDPR sense, is that it's not an opt-in. Apple, after 2019, has made the transcribing an opt-in option: if you set up a new device, they will ask you whether you want to help Siri improve by learning. Amazon is not doing this today. You can opt out, but you cannot opt in. And I definitely want to say something about this clickbaiting thing. ARD and NDR are public television broadcasters, so we are not commercially funded with ads or anything. So I would say, when I do research, it's not about clickbait. It's about getting insights, trying to shed light on an issue and to bring transparency into it, because many, many normal people like smart speakers and have them at home, and I think it's always pretty important to know what I have in my home. And I think it's not enough to write it, you know, in the sub-sub-sub menu somewhere in the small print.
And so this, I think, is part of the journalistic approach: to bring light into it. And also, I think, part of the scientific approach. Yeah, so I think I've now got the clarification. It was exactly as you said: by reading the manual, they didn't mean, as I thought, reading the developer manual, but just the user's privacy options and whatever precautions they take. And I think you've answered that question already, so we're happy there. Then there was a security question; I don't know how much you know about this: how difficult would it be for an attacker to deploy a different wake word model to the device, to get it to collect more data? Max, can you answer? Yeah. The wake word thing is quite interesting, but you don't need to; it's not that complicated. In fact, once you have physical access to the device and you have a root shell on it, you can mirror out everything that is said, all the time. You can dump it via netcat to a remote server, for example. And there is a hyperlink in the paper, on the website, that points to a presentation that does exactly this: they are wiretapping an Amazon Echo speaker. But first you need this physical access, as far as I'm aware, and then you can do this. And regarding changing the wake word model, I think that's something for Lea. Yeah, in theory, from what we have seen when we were looking into the device, it should be possible. We haven't done it, because it would be a lot of effort and probably painful, but I would not say it's impossible if you put enough effort into it. Yeah, as far as we are aware, it's based on the open source framework Kaldi, or on a 2011 version of that code, and Amazon will have changed some essential things. But it's not encrypted, so you can have a look into the binary code and try to reverse it, to swap in your own model if you really wanted to. So one problem is maybe that the wake word detection itself is not accurate, right? If it were perfectly accurate, people would feel more comfortable that it's not triggered at inappropriate times. I think you told me you did some research on whether there are potentially better wake words. Maybe you can say a little about why the wake words are what they are, why they are potentially suboptimal, and what could be done to make them better, even just as an incremental improvement. Yeah, I will start; Lea, you can join in. Our first idea was that a two-word wake word like "Hey Siri" is better than just "Siri", but that's not true. And regarding the history behind the wake words: they are mostly commercially driven, so there is no scientific reason why it's called, for example, Alexa. The wake word "Amazon", for example, is a pretty bad wake word, and similarly "computer". One reason for that is that we use these words on a regular basis when we communicate as humans; saying the word "computer" happens quite frequently, right? And in the paper you can find some anecdotes about why the wake words are the way they are. But there's definitely room for improvement; you could choose better wake words. And Lea also tried to find other accidental triggers, by comparing the 10,000 most common English words with the wake word and optimizing a distance measure based on the phonemes of the words, and with that you were actually able to find other accidental triggers.
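As a rough illustration of the approach just mentioned, ranking common words by their phonetic distance to a wake word, here is a toy sketch. The actual research uses proper phoneme transcriptions from a pronouncing dictionary and a tuned distance measure; this version just runs a plain edit distance over hand-written phoneme strings, and the word list and pronunciations below are made-up approximations for illustration only.

```python
# Toy sketch: rank candidate accidental triggers by phonetic distance
# to the wake word. The phoneme strings are rough, hand-written
# approximations, not real dictionary pronunciations.

def levenshtein(a, b):
    """Plain edit distance over phoneme sequences."""
    dp = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, pb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (pa != pb))  # substitution
    return dp[-1]

WAKE = ["AH", "L", "EH", "K", "S", "AH"]  # rough phonemes for "Alexa"

CANDIDATES = {
    "unacceptable": ["AH", "N", "AH", "K", "S", "EH", "P", "T", "AH", "B", "AH", "L"],
    "election":     ["IH", "L", "EH", "K", "SH", "AH", "N"],
    "computer":     ["K", "AH", "M", "P", "Y", "UW", "T", "ER"],
}

ranked = sorted(CANDIDATES, key=lambda w: levenshtein(CANDIDATES[w], WAKE))
print(ranked)  # words that sound closest to the wake word come first
```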
Yeah, word combinations that actually trigger the Alexa wake word. Do you know an example, the best word you found? I'm sure you tried to find that out. My favorite is "unacceptable" for Alexa. Maybe that's unacceptable for Amazon as a wake word. Yeah, it was quite interesting; we also found "election". "Election" also triggers Alexa. Oh, okay. So "unacceptable for Alexa" triggers Alexa. Yes, yes, okay. My question was: do you have a proposal for a good wake word, the best wake word one could choose, one that is as far as possible from any known English word? You see what I mean. So, one easy way would be to repeat the word, because that automatically increases the distance to other English words. I think that's something people usually do. Correct; you don't go around saying "Alexa Alexa Alexa Alexa Alexa". Yeah, that's something we propose in the paper, a safe word, so to speak. People want hands-free communication with the speaker and do not want to touch the mute button, which is always an option to avoid such triggers, right, but then you cannot re-enable it using your voice. One potential way that we imagine is a privacy mode implemented in the speaker, where you have to say "Alexa" three times in a row to re-enable normal operation, so to speak. The problem here is more a trust issue: similar to the mute button, people simply do not trust that Alexa is really not listening. And then they do what everyone would do; they simply unplug the device to be sure. Cool. So, going in a different direction, coming back to the fact that you're not using Alexa and Siri, a question for Svea: how would you change your mind and use Alexa daily if Jeff Bezos ran for the US presidency? Hypothetical question; I've actually never thought about that. It's a funny question. But, you know, I was really fascinated by the film Her, where Joaquin Phoenix is constantly communicating with this artificial intelligence, with this neural network in his ear. And I think right now we are really far away from all of that, because when I'm communicating with Siri, or with Alexa, there are a lot of things that don't work. So I think these devices definitely have to improve and learn a lot before they function in a way that really helps and makes daily life easier. But I'm definitely skeptical about this massive trade-off, and about what I said in the conclusion, the guinea pig factor, and for me there's no real solution on the horizon. The only thing I find really interesting is what was said earlier: that Google has already implemented a neural network on the phone which works offline. But once it works offline, it cannot improve, so you see, there is always this trade-off right now. Even though, I think, federated learning could theoretically help with that, and you could still learn from it. So, yeah, with federated learning it's possible that no single party has all the data; the neural network for the recognition would then be trained in a distributed way, so the devices only send the results, the model updates, to update the neural network, but not the raw audio.
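Since federated learning comes up here as a possible way out of the offline-versus-improvement trade-off, here is a minimal sketch of its core idea (federated averaging): devices train locally and only share weight updates, never raw audio. Everything below is a generic NumPy illustration with a made-up "training step", not any vendor's actual system.

```python
# Minimal federated-averaging sketch: each device computes a model
# update on its own private data; only the updates are shared.
import numpy as np

def local_update(weights, local_data, lr=0.1):
    # Placeholder training step: pretend the gradient is the difference
    # between the current weights and the device's own data mean.
    grad = weights - local_data.mean(axis=0)
    return weights - lr * grad      # only this update leaves the device

def federated_round(global_weights, devices):
    updates = [local_update(global_weights, d) for d in devices]
    return np.mean(updates, axis=0)  # server only ever sees updates

rng = np.random.default_rng(0)
devices = [rng.normal(size=(100, 4)) for _ in range(5)]  # private data
w = np.zeros(4)
for _ in range(10):
    w = federated_round(w, devices)
print(w)  # drifts toward the population mean, without sharing raw data
```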
So there is no central instance having all the information, able to extract other information beyond what is necessary to update the network. Do you think that legal requirements could change that, that those sorts of things could be made mandatory, so that the amount of data that is sent is reduced? That's a question someone asked: they're not necessarily worried so much about people listening to recordings, but about the fact that your very private data ends up in the hands of such a powerful corporation. I mean, it's like Big Brother in 1984; we really have to trust those people right now. Yes, and I think there's also another aspect I haven't covered in my talk in all its details. When you think it through: you have, for example, these transcribers on the other end. And you say, okay, call my friend Anna, for example, and Siri gets it wrong. The transcriber then has access to your contacts; they have access to the names, not necessarily the numbers, but the names in your phone book, because of course they can only correct things when they know what is right. You see, what I want to emphasize here is that it's not only the voice data, or the transcription of the voice data, but all the apps that are connected with Siri. So that data definitely gets crossed and mixed together, and this, I think, is also a very important point, because whenever you have several data sets about a person, you get more and more into this "knowing the person" territory, and I definitely think this is a problem too. You have the playlists, you have maybe the places visited, you have the contact data. And I would like to add that vendors definitely need to work on their way of obtaining consent. I was away during Christmas and updated to macOS Big Sur today, and the first thing that popped up was: oh, by the way, you can use "Hey Siri", and the checkbox saying that all my voice data is uploaded to Apple was ticked by default. They should implement real consent forms, so that users actually have the option to consent. Currently it's either pre-ticked, pre-enabled, like you are signed up by default, or the UI is designed so that the less privacy-preserving option is light blue, enabled by default and easier to click, while the tiny thing on the lower left is "no, I do not want to upload and share my data". And that's wrong and needs to be changed. Yeah, I mean, everyone has encountered, I think, some pop-ups on websites where it's just impossible to unset all the tick marks, because you have to do ten of them. Someone wanted to know a bit more about the metadata. For example, I was wondering: people often say, you know, I spoke about wanting to get a new vacuum cleaner.
They said that without saying "Alexa" before it, and the next day on Facebook they see ads for vacuum cleaners. Is that something you got some insight into, that Alexa might still be listening in on those things? That's a fear that people have. Yeah, that's tinfoil-hat stuff; we can definitely reject this. Alexa is only uploading voice data when it indicates that via the LED ring. But what is happening in the background, without LED activity, is that it submits metadata. For example, there's a fingerprint database on the device. That is a protection mechanism so that it does not trigger when Alexa is mentioned, for example, in a TV commercial. This fingerprint database is updated once a week, and the device also transmits the number of times a fingerprint matched. So in theory, Amazon could know what car commercial you have watched recently on TV, because the Echo speaker listened to it and decided not to enable itself, since it found a match in this fingerprint database. So the metadata is the most problematic stuff here. Yeah. And I had a Pi-hole installed, and I also looked into the data, but I did not do an exposé or anything; I could not share it in my talk because I did not do this methodically. I just, you know, installed the Pi-hole and then looked. And I definitely saw that Alexa is doing stuff; the speaker does things in the background, not transmitting voice recordings, but it does things. Max, maybe you can give more insight into this. Yeah. For example, it reports what Wi-Fi you are on, whether you are currently using Spotify or not, what the currently active skill is, and things like this. And it's happening all the time. I mentioned before that the cloud neural network decides whether this is Alexa or not and can stop the upload of the data, and this happens within one or two seconds. It's only that fast because every speaker maintains a constant SPDY connection to the Amazon cloud. So there's definitely a constant connection to the Amazon cloud, all the time, when the speaker is on. I think the speaker gains massive insight into your life. And when you connect this data with the Amazon Prime account, it is enough to show you a vacuum cleaner ad; I think this metadata is enough to propose the right commercial or the right products to you. The metadata is enough; they don't need to analyze all the blah blah in the room. That's what I think about it. You might not be legal experts, but do you think you can opt out of this metadata collection, or is that just hardwired? And is that even legal under the GDPR; is it essential? When you buy an Echo speaker and set it up at home, they ask you whether you agree with the terms of service, and then you click agree, and then you have kind of sold your soul. It is written in there: we are using all the data we can collect. The only thing you can opt out of is the thing with the transcribers: I don't want a third person listening to my voice recordings to improve Alexa, my smart speaker.
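To illustrate the fingerprint mechanism described above, a local database that suppresses wake-ups during known broadcasts and only reports how often each fingerprint matched, here is a hedged sketch. Real audio fingerprinting is far more involved than a hash, and the sync details are assumptions; the talk only establishes that such a database exists, is updated weekly, and that match counts are transmitted.

```python
# Sketch of the media-fingerprint suppression described above. A hash
# stands in for real audio fingerprinting; contents are made up.
import hashlib
from collections import Counter

# Weekly-synced database of known broadcast snippets, e.g. TV
# commercials that contain the word "Alexa".
fingerprint_db = {hashlib.sha256(b"known-alexa-tv-ad").hexdigest()}
match_counts = Counter()  # per-fingerprint counters, reported weekly

def fingerprint(audio_bytes):
    return hashlib.sha256(audio_bytes).hexdigest()  # toy stand-in

def should_wake(audio_bytes, locally_detected):
    if not locally_detected:
        return False
    fp = fingerprint(audio_bytes)
    if fp in fingerprint_db:
        match_counts[fp] += 1  # metadata: "this ad was heard N times"
        return False           # suppress the trigger for known media
    return True

print(should_wake(b"known-alexa-tv-ad", locally_detected=True))   # False
print(should_wake(b"alexa, set a timer", locally_detected=True))  # True
print(match_counts)  # what would be transmitted in the weekly sync
```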
And did you speak to lawyers about whether that's okay? Yeah, of course, with the data protection commissioner. Okay, and it's not: the GDPR wants real consent to things like this, so it's not okay in a GDPR sense. Amazon is sitting in Luxembourg, and it's on the desk of the Luxembourg authorities to do anything about it, and they don't. Great. All right, I think we've covered almost all the questions. There were loads of questions, and it was really interesting. Maybe just one more thing: we'll post, maybe with the video later, a link to your paper, where you go into a lot more detail. And at some point you said you also share raw data. Correct; we share all those accidental triggers there. So if you want to dive into more detail, have a look at the paper. Thank you so much. We'll have the next talk here in 15 minutes. And yeah, thanks for listening in to Chaos West. Come back, and share this video if you liked it; it's being recorded and will be available on media.ccc.de, YouTube, Twitch, and so on. So yeah, thanks a lot. Thank you. Thank you.
Sex, private conversations, daily routines - smart assistants like Alexa and Siri often know more about you than you may think. This talk shows what metadata reveals about your private life. It investigates how private conversations, wrongly transmitted, are also analyzed by human transcribers hired by big tech. This talk investigates smart assistants like Alexa and Siri. For half a year, we collected our own Alexa voice data in an experiment, then analyzed and investigated it. We looked deeply into accidental triggers: words and sentences that wake Alexa and Siri up even though they shouldn't. Each time, a couple of seconds of audio data is transmitted to Amazon or Apple. And as is widely known, real people are listening to that. Sex, children, employees talking to their bosses - we met whistleblowers and transcribers who have worked for the big tech companies. They improve the artificial intelligence of smart speakers at the price of private eavesdropping. The analyzed metadata revealed even more about how deeply smart speakers intrude into your private sphere - and that, in the end, Amazon, Apple and Google will know (nearly) everything about you. Disclaimer: This research was part of a journalistic investigation, which was already published in June 2020.
10.5446/51965 (DOI)
Welcome to today's talks, live from the Bitwäscherei video studio here in Zurich. If you have questions that you want to ask the speakers about the talks, please use the chat; you can find the link below this livestream. My first guest today is Vardom. He will give his talk in English, so I'll switch languages now. Vardom will tell us about media and how we consume it, and more precisely, why we consume media the way we do. I'm looking forward to learning about that. Thank you very much, Daniel. Hello, everyone. It's a great honor and pleasure to speak in front of you, even though it's remote and I can't see you directly. Who am I? My name is Vardom, and I have a problem. My problem looks a bit like this: I consume a lot of media content that feels like noise. Content that has no lasting impact on me, content that maybe stirs my emotions and feels good in the moment, similar to junk food, which also feels good in the moment but just leaves you empty afterwards. Content that we call clickbait, content that has no substance, content that really is junk food. I often ask myself what the world would look like if most content were transformational: content that makes you think, content that has this kind of substance. I sometimes call it butterfly content, content that can have a butterfly effect on your life. I'm sure all of you have experienced reading, watching or hearing something that really changed how you look at something, or gave you a new insight into a topic. So it was transformational for you. I ask myself: what would the world look like if we had more of this content, if that were the majority of content, and not the junk food content? In order to answer what the world would look like in such a case, we have to take a step back and ask why that is not the case today. Why doesn't the world look like that today? Why is it that the loud, clickbaity content is what gets heard the most? And to answer that question, we have to follow the money, as they say, and look at how we pay for media today. There are two business models that dominate. The first is the advertisement business model, which you see in this illustration, and you can see how this model works. In the middle you have an intermediary, which is a platform or an online newspaper, for example. On one side you have the advertisers, and on the other side the users and the creators. What happens is that the advertisers pay the platform, and the platform uses that money to give the users a "free" service. Why "free" in quotes? Because we don't pay financially, but we as users pay the platform with our attention and our clicks. And that is what the platform sells back to the advertisers. So the advertisers pay the platform to get our attention and our clicks. And what is the alternative today? The alternative today is paywalls. And paywalls work like this: again you have an intermediary, but this time the users and the creators are split into two groups.
This paywall model is mostly used by online newspapers, where the creators are journalists, for example. So the journalists create content for the platform, the platform gives that content to us, the users, and we pay something to the platform. The platform takes the money and gives a bit of it back to the creators. So this is a classic intermediary model. Now let's look at the good and the bad sides of both models. First, the good and the bad sides of advertising. First of all, it's very inclusive: you don't pay any money to get access, you don't have to invest financially. You can be part of the platform no matter who you are. And you can be a user and a creator at the same time, both producing and consuming on the platform. And it's generally openly accessible: if you think about Twitter, the information is not hidden behind a paywall; the information is generally openly accessible to everyone. That's the good side, but the bad side is of course well known: these platforms are tied to the click-rate business model. That means the business model creates the incentive to sell as many clicks as possible, to capture as much attention as possible. And that leads to low-quality content, to attention harvesting and to data mining. Why? Because it simply doesn't matter how high the quality of the content is, as long as it makes us click, as long as it captures our attention. That is why we get clickbait content: because there is a financial incentive for the platform to do exactly that, to monetize our attention and our data. The more data the platform has, the better it can predict what we will click on next. And of course they also work with the help of algorithms to figure out how best to capture our attention. That is what makes these platforms so addictive. It is simply the business model that gives the platforms that incentive. It's not that they are evil or anything; it is really in their financial interest to do this. And that can of course lead to big problems, because it can also be exploited, for political purposes among others. Advertisement-based models are usually centralized, so you have a central instance that is being paid by the advertisers. Now the good and bad sides of paywalls. Well, first it's good that the quality of the content is higher, because someone is actually willing to spend money on the content. So that's one good thing, and the other good thing is that a part of that money goes to the creators. So, for example, the journalists are then paid with the money that the people provide. But the problem with paywalls is that they are not inclusive, meaning that if you don't have money, you don't get the information; you're left out. And it also means that the users and creators are not the same: you cannot be a creator and a user at the same time. So you could not say, for example, oh well, I'm going to publish tomorrow in the New York Times or something like that. The platform, or the newspaper, decides who is a creator and who is a user. And paywalls fragment the web, meaning that they go against the very basic principle of the internet: the open accessibility of information.
And that fragments the web: it puts content literally behind walls, where only people who have money can access it. On top of that, it's not transparent. With a paywall, you pay a monthly sum to a medium and you hope that they will provide you with good content. But you don't pay for single pieces of content; you pay a monthly sum to the medium. And paywall-based models are usually even more centralized, because the decision about who publishes and what gets published lies with the newspaper or the platform. In the paywall business model, this decision is in the hands of the platform or the newspaper. Now, I've asked the question: what would a world look like if we had more of this transformational content, more content that really has super high quality, makes you think, and provides you with new insight? So in order to have that new world, what would a business model look like that provides it? Well, it would have to combine the good parts of the two models we have just talked about, with as few of the downsides as possible. Meaning that it would have to have the overarching goal of high-quality, transformational content, as I said. That would have to be the big goal, the big incentive for every stakeholder in the system. And everyone who contributes in that system towards that goal of high-quality, transformational content should also be financially rewarded, so that they have a stake in that goal. Plus, it should be openly accessible, meaning that even if you don't have money, you should be able to take part and get the information. It should be inclusive in that sense, and it should also enable everyone who wants to publish to do so. Everybody should be able to be a creator or a user, as they choose. And it should be decentralized and democratic, meaning that there should not be a central instance that decides what high-quality content is and what gets published. Those should be democratic, crowdsourced decisions. So, that's actually what we are working on. We think we have found a solution that could be very interesting. We call it Butterfy, without the L. And we call it a crowd filtering system. You can see on this slide how it works. At the top left you have the creators. The creators create content, and this content then goes through a filter with four different stages. And that's where the users are: the people are split across four stages, and they filter the noise away, so the content becomes better and better. Each person decides for themselves: is this piece of content something I want to send to the next stage? Is it transformational? Does it have this butterfly effect on my life, or the potential for it? If yes, I'm going to send it to the next stage; if not, it's not going to go to the next stage. So it's a democratic filter in that sense. Now, you see that stages three and four have a blue color, and that is because they are the ones who spend money. And what do they pay for? They pay for the privilege of getting only this high-quality, transformational, substantial content. They pay for the privilege of not having to filter through the noise themselves. Other people do that for them, and they pay for that work.
What's important is that they only pay for the pieces of content which they think have that transformational power, which they think can really have this impact on other people. So they pay only for the content that they send on to the next stage. And, also very important, the money that they pay is split. The money goes first to all the people who have filtered the content for them. That means that if you are in stage one or two, you consume everything for free. Of course, there's a little bit more noise in there, but you consume everything for free. Plus, you can even make money, because you get a little share if you find high-quality, transformational content that the next stages are going to pay for. And apart from the people who filter, the money is also split towards the creators, obviously, because they created high-quality content that went through that filter. And a small share goes to the platform, for providing the infrastructure and the service. So every stakeholder, everyone in that system, has an incentive to find and produce high-quality, transformational content that really has an impact on other people's lives. Very important: you cannot come to the platform and say, okay, I want to be at stage four directly, I'm going to pay, no problem, but I want to be at stage four. Not possible. Everyone has to start at stage one. So it's really democratic in the sense that you have to start at stage one and prove that you provide the system with value. You have to prove that you can do that and that you are willing to do that, and only then can you go to the next stage. And that's a democratic decision, not something that I or the platform decides; it's a democratic decision-making process. What else makes the platform democratic is that there's a separation of powers: there are four different stages, and none of those stages has more power than the others. People are split across those stages, so no one can be at all the stages at the same time. Different stages, different people. And that means, for example, that stages three and four, who pay, don't have more power than the others, because they only get to choose from the things that have been provided to them by stages one and two. So it's really democratic in the sense that there is a separation of power. The users have to go through that transformation process themselves; that's also democratic. Nobody can come and say, I want to be at stage four directly. And it's decentralized, meaning that the crowd decides, the people decide; there is no central instance that decides what is good or bad, transformational or not. And then there's the equality of opportunity, which is also very important. Everyone can be a creator; everyone can publish to the platform. And if they have a story to share that is really of value for other people, then it doesn't matter where they are or who they are, or whether they live in a so-called third-world country. Because I truly believe that today the most important stories have not been heard yet. In the paywall-based model, many people simply don't have the opportunity to publish in such a newspaper or publication. And in the advertising-based model, the important stories are being drowned out in the noise of everyone else; there's really no quality filter there. So I truly believe that this could be a system that leads to those voices being heard as well.
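As a concrete illustration of this splitting, here is a tiny sketch. The actual percentages were not given in the talk, so the shares below are invented placeholders; the point is only the shape of the mechanism: one payment per piece of content, divided between the creator, everyone who filtered it upward, and the platform.

```python
# Toy sketch of the per-content payment split described above.
# The share values are invented placeholders, not Butterfy's numbers.
CREATOR_SHARE = 0.50
FILTER_SHARE = 0.40   # divided among everyone who forwarded the content
PLATFORM_SHARE = 0.10

def split_payment(amount, creator, filterers):
    payout = {creator: amount * CREATOR_SHARE,
              "platform": amount * PLATFORM_SHARE}
    per_filterer = amount * FILTER_SHARE / len(filterers)
    for user in filterers:
        payout[user] = payout.get(user, 0.0) + per_filterer
    return payout

# One stage-4 reader pays 1.00 for an article; the money flows back to
# the creator and to the stage-1/2 users who surfaced it.
print(split_payment(1.00, creator="alice", filterers=["bob", "carol"]))
# -> {'alice': 0.5, 'platform': 0.1, 'bob': 0.2, 'carol': 0.2}
```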
So it's a democratic system in every sense. In this sense, it means that... well, let's go to the first stage here; that's what we're working on right now. It means that you can go, right now, to butterfy.me. Again, Butterfy without the L. Butterfy.me is the website where, if you open it, you see a screenshot here, you see stage one. You get the question: what are the most life-changing things you've ever read on the web? So think hard and try to provide other people with value. If you have something that has had this kind of butterfly effect on your life, share it with the platform, and if people democratically decide that they also see value in it, you will be promoted to the next stage. And very important, this is a non-profit project. As you have seen, the system is self-sustaining, so you don't need any third-party investment or anything. That means it's a community project; it should be as democratic as possible. So we're really looking for people, maybe people like you, who can help us with that, who want to be part of it, because the more people are part of it, the more democratic it actually becomes. With that, I think I will hand back, and if you have any questions, or any feedback, please feel free to reach us, either through our website, butterfy.me, or at support at butterfy.me, which is our email address. Please feel free to reach out to us, and now I'm available if you have any questions. Thank you for your interesting talk and for this invention you just presented to us. Maybe for all of you again, the hint that you can ask questions directly to Berlin by using our chat channel; I will try to ask them for you today. Let me start with my own question first. You're saying that there are multiple stages. My question would be: how does content pass on? Is it a majority vote, so that an article, a news piece, a video or whatever moves on when a majority of the users at a stage approve it? It is a decision by each individual, but we plan to introduce some randomness to prevent the majority from always ruling. That means the content is not shown to all 100 people at a stage directly, but only to 40 randomly chosen people. If the majority of those 40 random people want the content to go to the next stage, then it is actually forwarded to the next stage. We want to prevent the majority from always deciding everything; there is a certain element of randomness. And that then has to happen twice? Yes, because there are two stages. It is always a division of power, but it may be that the 40 people who randomly see the content, the majority of them, decide whether the content goes to the next stage. It is really just a division of the power that the people in these stages have. Now a question from the web. The first person is critical that you still fragment the web with stages three and four. The information is not fragmented, because at stages one and two everything is accessible to everyone; the information is accessible for all. It is not fragmented, because at stages one and two all the content pieces are freely available to everybody.
But then, the idea is that the people at stages three and four, who maybe don't want to sift through all the noise themselves, don't want all the information; they only want what is important to them. Maybe they don't have much time, or they would rather spend money in that sense. But that is not fragmentation, because everything remains freely available to everyone. [unintelligible] But if you choose the things that really provide value for other people, then you are financially rewarded. And the other thing you have at the free stages is a bit more noise, but that is also the case in the advertisement-based models. Even better here, I would say, because on this platform there are only people who share the aligned interest of finding transformational things, things that don't polarize, not this clickbait content. So I think you will definitely find a lot of insight even at the free stages, where you pay nothing. I'm not sure I understood the question 100 percent, but maybe you will. The question is whether the information is already out there, but people's feelings are the problem; you have, for instance, fake posts that play on emotions. Is it possible that your system ends up spreading the most emotional information? Oh, okay, that is the classic question: what comes first? Do people want sensational content, or are they so conditioned by it that they don't know anything else? I am quite convinced that this kind of content would not dominate in our system. Some content can be entertaining, and that is really not a problem. In this system, nobody dictates what is good or bad. You can also reward entertaining content, or content that doesn't always have to be rational; it is about rewarding value. I am sure that all of you have read articles where you thought: I have wasted my time. That is what we are trying to prevent with the system. Maybe I believe too much in the good side of people, but I don't think we would have that problem in this system. Everyone I have talked to about this topic has a similar problem: they feel overwhelmed. It is really a race to the bottom that we are in with the advertisement-based models. I think it is really time for something different, something that has the interest of transformation and of society at heart. That would be my answer to this question. We have three questions from two people, but they are very similar, so I have grouped them together. The question is how you bootstrap the system, and how you avoid an initial bias, because at first you only reach the people who are at rC3 or similarly minded, so you end up with a kind of echo chamber. And that is probably the same problem as the next question, which is a bit more evil-minded: how do you protect the system from being controlled by bots?
If someone has a list of all the paid people, or paid bots, that is a bit more evil-minded still. The first question was where we have to start. I think I am in the right place right now, telling all of you about it. That is the idea, because I think it matters who is on the platform at the beginning: that at the start people are open-minded and look at the quality of the content. [unintelligible] And the more people join, the more democratic it becomes, and the less it matters who was on the platform first. And then the question about the bots. The system keeps a kind of score: if you are a creator, it looks at how the content you publish performs; if you are a filter, it looks at the content you forward, how far it gets, how much content you send on and how it does afterwards. And if you are a bot, and the system detects that this score is really low, meaning that the people at the next stage consistently rate the content you forward as very bad and it does not move on, then the system will warn you, and perhaps block you for a certain time, so that you cannot push more content onto the platform. That prevents spamming. And then there is also a report function: if you see content that is spam, or you think a bot posted it to the platform, you can report it. But the report doesn't go to a central instance; instead, the system chooses randomly a certain amount of people, who are shown the content, and then it's a democratic decision of this random set of people whether the content stays on the platform or not. So it's a kind of democratic, decentralized report function as well. So, that's one aspect, and then we also have the aspect of time. I haven't talked about that yet, because it's actually important, but it's like a technical detail: we have a time lag in the system. Each decision you take is recorded in the system, but not immediately implemented, meaning that Butterfy will show you the content again after a few days and ask you: hey, do you remember, you wanted to send that content to the next stage; do you even still remember that article or video, whatever it was? If not, it probably wasn't transformational. If yes, do you really still want to send it to the next stage? This time lag also prevents bots from immediately doing something, or rather, it increases the chances that we all really take a step back and ask ourselves: is this really something that, over those days, over this time, had an impact on my life? Or is it just something that made me feel good in that moment, something I wanted to reward then but that today I don't think matters anymore? This time lag would also highly increase the chances that the content that goes to the next stage is content people have thought about and really carefully chosen to send on.
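Here is a hedged sketch of the randomized decision mechanic just described: a decision is delegated to a random subset of a stage (the 40-out-of-100 example from earlier) rather than to everyone, and only re-confirmed votes, after the time lag, are counted. The subset size and majority rule follow the talk; everything else is an assumption for illustration.

```python
# Sketch of the randomized, decentralized decisions described above:
# a random subset of a stage votes, and a simple majority decides.
import random

def subset_decision(stage_users, votes, subset_size=40, seed=None):
    """Promote/keep content iff a majority of a random subset approves.

    `votes` maps user -> bool (their re-confirmed judgement after the
    time lag); only the randomly chosen subset is counted, so no single
    bloc can control every decision.
    """
    rng = random.Random(seed)
    jury = rng.sample(stage_users, min(subset_size, len(stage_users)))
    approvals = sum(1 for user in jury if votes.get(user, False))
    return approvals * 2 > len(jury)   # strict majority

stage_two = [f"user{i}" for i in range(100)]
votes = {u: (i % 3 != 0) for i, u in enumerate(stage_two)}  # ~2/3 approve
print(subset_decision(stage_two, votes, seed=1))  # likely True
```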
So, speaking of time, there is another related question: sometimes time is crucial; let's consider, for instance, a security vulnerability publication. Passing all four stages takes time, and you might miss the news if you only subscribe to stage four. How do you deal with that? Yes, that's definitely a disadvantage of every democracy; decision making takes time in every democracy. I don't think you can prevent that with this system; it just takes time. And I actually think it's a feature that it takes time. Probably it would not be the right platform to publish a security vulnerability straight away, at least not in the form I presented now, when you could alert the public more easily through today's social media or something like that. But I also believe that if you shared a security vulnerability with the world, people would see that, hey, there's value in that, we have to make it heard, and then it would also go through the stages. It would still take a little bit of time, but I'm really convinced it would go through. Of course, sometimes time is critical, and then maybe this wouldn't be the best use case. But I think this aspect, that it takes time, is really a feature, because it makes you take a step back and actually reflect on your choice. All right, you're already over time, but there is one more question, which was asked twice, once on Twitter and once in the channel here: really cool idea, but I do not understand why you want everybody to start at stage one. That way you are cutting down your target group and getting fewer users or customers. And the person on Twitter has a similar concern: people who are willing to pay, who basically have a lot of money, might value their time higher and not be willing to spend time going through stage one too, but they would be willing to pay for already moderated content in the stream. All right. I think it's actually very important that everyone goes through the transformation model, because it aligns your interests. Only the people who take that time and go through all those stages prove that they are interested in the high overarching goal that the system has. Only if you prove that you are interested in that are you allowed to proceed to the next stages. That's really essential to the democratic function of the system. I compare it a bit to Wikipedia, where you also cannot just immediately edit something and have it accepted; you have to take the time, it has to go through different stages. And I think that's very important, because it also doesn't tie money to status or anything; money does not influence the decision-making process. It just aligns the interests of everyone and assures the platform, or the system, that the people who go through the stages are really invested in the idea. And I think even if you have money and would like to use such a system, but don't want to go through the stages, you would have to ask yourself: why not?
Just do it, provide value, and you will get to the next stages, and it allows the system to stay democratic and not give people who have money an advantage over everyone else. So, linked to this, maybe you can answer this one too: is it also possible to choose to stay at stage one or two? Yes, definitely; that's something I haven't mentioned, but it's always your own choice whether you want to spend money or not. So you can definitely also stay at stages one and two. You will be notified that you could go to stage three if you want to, but if you don't want to, you obviously don't have to. All right, thank you very much for this talk, and for all the answers to the questions. So we take a little break here, and we are back in about 25 minutes. That will be a talk about net sporting things in the future.
Today's (social) media is broken. We are building a democratic, decentralized alternative. To do that, we are fundamentally reinventing the business model behind a media platform. The goal is to end privacy invasions, advertisement, paywalls, and unaccountable media corporations - and to help everyone focus on transforming their lives and society at large. **What?** Social media originally promised the democratization of media. In today's polarized world of disinformation, we know it did not work out. The main reason for that is the business model of advertisement. In this model, our attention and behaviour are modifiable commodities, sold to the highest bidders. As we have seen, today's centralized platforms and their ad-based model are misused heavily to damage democracy, incite hate and fuel conflict. We are being driven into polarization, losing our common ground. So it is past time we end this race to the bottom, before it is too late. **How?** The existing alternatives - e.g. paywalls - don't work, primarily because they contradict the open knowledge model of the Internet: they fragment the web, and they are non-inclusive and opaque. We need to think outside the box by fundamentally rearranging the incentive model and building an open, free and democratic alternative. That is what we are working on. We are building a media crowd-filtering system with different stages. People can access all the content for free at the earlier stages. As a small service in return, they filter the noise for the later stages by curating the content. The people at the later stages pay a small amount for the privilege of seeing only the highest-quality content and not having to go through all the noise themselves. There is one big catch: the people at the later stages don't pay the small amount to the platform, but in fact to the **individual pieces of content that changed their life**. Their money goes to the **creator of this piece of content** and to **everyone who signalled it for them at the earlier stages**. At the same time, it acts as a vote, further filtering out weaker content. This reward concept makes it a self-sustaining system with no need for outside investment, and at the same time leads to crowdfiltering towards truly life-changing content. The incentives are changed so that everyone involved has the quality of the content in mind, not the click rate. This is only a brief description. **The talk will explain our approach in more detail**.
10.5446/51699 (DOI)
Good morning. It's extremely bright here, so I can't really see you unless I do this, which is kind of annoying; I'd like to be able to see the faces of the audience. But, oh well, I guess that's what it's like. So, welcome to the talk. I want to go back in time a bit. About three months ago, I was working late, and I was unintentionally working late; I'd been caught up in some tasks I wanted to finish. And I realized I was late because I got a notification on my iPhone from the calendar app saying that you're now entering the block of time which I call family time, which means I'm supposed to be home with my family. Now, I'm not necessarily the person who has to structure every minute of my schedule, this is family, this is fun; it's not like that. But I work in a distributed company. I work at Xamarin, which is now part of Microsoft, so there are over 100,000 people, and they don't necessarily know that I'm in a different time zone than they are. So I block off this time slot in my calendar to make sure people don't book meetings when I want to be with my family. So I got this notification, and I realized, oh, I'm supposed to be home now, so I'd better hurry up. So I kind of ran off. We use the Slack app for chatting at work, so I sent off those last messages, finishing the conversations on Slack, ran out the door, switched over to the mail application on the iPhone, sent off the last email, and tried to hurry to get home as soon as I could. I use public transportation to get to work and back, because I really hate traffic; I hate being stuck in queues. I like the fact that you can get on a bus or a train, sit down, do some work, read a book, relax, whatever. So I was trying to get home, and I switched over to an application we have in Denmark called Rejseplanen, which is basically a travel planning app for public transportation; I'm sure you have something similar here. It's actually a really nice app: it will tell me how to get from where I am right now to where I want to go, in this case home. And it knows the local transport system, so it's slightly better than, say, Google Maps. I entered that I wanted to get home from here, and I realized: this is going to be really late. Like, I'm going to be very late. So I went to the Messages app and sent a message to my wife: sorry, I'm really late, I miss you and the children, I'm so sorry, but I'll try to get home fast. And I kept walking a bit, and I thought, you know what, I'm just going to take a taxi; I really want to get home today. And, you know, taking a taxi in Denmark is like here: it's not something you automatically do, it's ridiculously expensive, so it's not a default thing. But I decided I wanted to do it. So I switched over to the Google Maps application to check my location, because I'd been walking for a bit, and I wanted to call them and tell them where I was. And just when I was about to call, I remembered that the last time I called the taxi company, I got into this queue, and they said: why don't you try our new mobile application? It's a really easy way to book. And I thought, yeah, that's way better than waiting in a queue. So I jumped over to the App Store, searched for the application, downloaded it on the fly, got it on my device, and opened it. And I was a bit puzzled; the UI was a bit weird. But I kind of figured out how to get to the booking screen.
And I tapped the booking button, and I got this. Boom. Crash, straight back to the home screen. And I was really annoyed. Like, I was late, I was trying to get home, I was trying to book a taxi. You know, it was a very basic thing I was trying to do. I just got this experience: crash. And so what do you think I did? What did I do? Anyone? What? Try it again. Try it again? No. You deleted it. Exactly. That's exactly what I did. I stopped using the application, I deleted it. I went to the App Store because I was pretty annoyed at this time and trying to get home. I told them, in a public rating, one star, please fix your stupid application. It doesn't even allow me to book a taxi, which is the most basic operation of this application. And the UI sucks. You know, I was upset at the time. I don't usually kind of leave aggressive App Store reviews. But on the other hand, do you think that's unrealistic? Do you think that's, I mean, this is a true story. This actually happened. Is that normal behavior? Yeah. I mean, if you think about it, think about what I'd been doing just prior to getting this experience, right? I was at work, I got a notification, sent out a message on Slack, switched over to email, walked a bit, switched over to the travel planning app in Denmark, which is quite good. Realized I was going to be late, sent a message to my wife, switched over to Google Maps, an awesome application. Got my location. I was just about to use the phone app to call the taxi when I decided to switch to the App Store app, downloaded the app on the fly. Everything smooth so far. And then I got this. This experience. And this is how I felt. It was like, burn this thing. So my point with this whole story and kind of this message here is that users really have extremely high expectations for the quality of their mobile experiences. If you look at every one of those apps, they have beautiful UIs. They're really responsive and fast. They have great user interaction design. They publish updates to these applications in response to App Store feedback multiple times per month. And they just have a ton of useful features, right? And that application, whoops, I'm going to have to scan through these animations. That application that you're building, or that this taxi company is building, is sitting right there in between all these world-class apps like, you know, Snapchat and Instagram and Facebook. Your app is right there in the middle and it's definitely going to be compared against the world's best apps. So I'm trying to get the point over that the expectations are really high. When you screw up, it's publicly visible in the App Store via the rating system, and users are pretty tough. They'll do this. At the same time, while there's like a high bar, there are also some unique challenges. Quality on mobile is just really hard. Because if you think about it for a moment, there are platforms like iOS, Android, Windows. And there are vendors, device vendors. So how many devices are out there? Well, there are multiple vendors. Each of the vendors has multiple models. The models feature different hardware like CPU, memory, screen sizes, resolutions. And each of those runs different versions of the operating systems. So there are just so many combinations. Trying to test on all of that can be pretty hard. So one way to kind of constrain this problem, to make it easier, is just to say, you know, I'm just going to limit myself. I'll only support, you know, Samsung, the newest version of the Samsung phone and maybe the newest iPhone or whatever.
So if you were to take this approach and you wanted to, say, limit yourself to the 75% most popular models out there, how many devices do you think you would need to test on? To get that market share coverage of the 75% most popular ones? How many? Anyone? No idea? 300. 300. It's actually less. 75%. Wow. But you're getting there. You may actually be right, because this data I have is actually probably three years old now. So it could be worse. So to get, and this is US data, by the way, to get 75% of the market share of the models out there, you would need to test on 134, at least three years ago. Actually, there has been even more fragmentation in the space since then. So you could be right. It could be 300 now. And you have this exponential thing, right, where if you want to push that 75% even higher in terms of coverage, you get this exponential growth in the number of device configurations. So this is just really tough. So if you're a QA manager, what's your strategy going to be? You're responsible for delivery of a mobile application. Are you going to test everything on all devices? Are you going to test all the features of all the applications? Are you going to buy all these devices? Are you going to hire someone like this extremely efficient girl? I've never seen anything like this before this picture. Excuse me, is that a picture from a real testing facility? No. No, we don't have people. But this is an example of what people would do, right? They will outsource the testing, basically, to fairly cheap labor. And they are specialized in very efficient multi-device testing. I just really love this picture. But then again, you can think about, like, if you're testing on, what is it, like, 50 devices, something like that, what's the chance, when you're moving really fast, that you're going to make a mistake? It may actually be pretty big. All right, so I think I'm getting my point across. So this is how you should feel, like, ah, the scream, right? So if you're not familiar with this picture, it's actually from here in Oslo. It's by Edvard Munch, and he actually gave a German title to this picture. Do you know what it is? The German title? No, it's Der Schrei der Entwicklung mobiler Apps. So translated to English, it's the scream of mobile app development. And you can go look that up if you don't believe me, which you probably don't. This is how people feel, like, really anxious. So now what? What do you do? Well, I'm going to argue that one approach you can take is actually to focus on automation. So if you're able to automate your tests, and as we'll see later, to deploy them to a range of different devices, then you can speed up your cycle, and you can ensure higher quality, and you can test on a lot of devices very easily without going through extreme measures. So I want to talk specifically about automated UI testing. Now, I know that there is more, definitely more to life than automated UI testing, and I know about the concept of the testing pyramid, which says that you should have more unit tests than you have integration tests, and you should have more integration tests than you have automated UI tests. But I would actually argue, if you were starting from scratch - and it turns out that actually about 75% of mobile app development projects are starting from scratch, like they have no automation at all, they have no CI - now, if you're starting from scratch, that's 75% of you guys here, and you want to get value, and you can only write one test.
Well, I would argue that if you're just able to write one test, you're getting a lot more value out of writing an end-to-end automated UI test or a smoke test, rather than writing one unit test and running that. I think that testing pyramid story only applies once you have a decent test coverage, and then it tells you something about the ratio between unit, integration, and UI tests. Anyway, just in case someone's not familiar with the concept: automated UI testing, the idea is to have a program, a test, which simulates what the user does to an application. And what does the user do? Well, he or she interacts with UI controls, right? Like tapping, scrolling, swiping, entering text. That's what a user does to the application. Now, you're going to simulate that in a program. So in order to do that, you need to be able to talk about gestures, like tapping and swiping, and you need to be able to talk about views. So the way we do that, at least in our tool, is that we have what we call a query language, where you can specify, you know, it's this exact button that I want to touch, or it's this exact text I want to find. And there's kind of a nice little DSL, a declarative language, there to let you easily do that. So some examples here - this is in C# in this case. So: application, please tap anything that has the text, the string "Help". Whether it's a button or a text doesn't really matter. Just find something with the text "Help". Or: application, please tap the element e which has the technical ID "historyBtn", the history button. Or: application, please wait for the element e which has the text "ink". So this is the kind of language you would use in a program like this to tell the program what to do to the application - tapping, waiting for events to occur, like wait until there's no spinner visible on the screen. In addition to these various gestures, you can also generate screenshots. Like, I want to see what the application looks like just now, save that to a file. And you can manage the app lifecycle. So launching, stopping the application, clearing the application data. So you're starting from scratch. That's often something you need to do during a testing cycle. Like completely uninstall it, install it again and start from scratch. And then some tools like ours allow us to actually do some things that are pretty hard to do as a human. So one example would be to simulate the GPS location of the device. So writing a test that says, given I'm at Oslo Airport and I tap the book button or whatever. And then in the next step you say, now set the location to London. So you're pretending that you're flying from Oslo to London. And you can do that now within milliseconds. And another thing you can do, which is like very low level, is basically grab an object inside the application and start calling methods on it. So it's a pretty advanced thing, not necessarily something you need to do. But it can be extremely handy to speed up the test or set the application into a specific state. All right. So that's a basic overview of automated UI testing: simulate a user using the application. Now there are a ton of different tools and frameworks out there that you can use to write and run these UI tests. So the example you saw before is from Xamarin.UITest, which is one of our own products, which is basically a C#-based API. It's cross-platform on iOS and Android. So you can write tests that work on Android and iOS applications. Basically you can run them.
You can write these things as NUnit tests and run them from inside Xamarin Studio, Visual Studio or from the command line if you want. There's also support for SpecFlow if you're familiar with that. There's a system called Calabash, which is very similar, except that you are writing your tests in Ruby and you're running them using a tool called Cucumber, which is like a behavior-driven development tool. So if that's matching what you do in your company, then maybe you want to go down that route. Then there's a different, slightly newer option out there, which is called Appium. And the idea with Appium is, you know what, this UI testing stuff, we already did it. It's Selenium, right? It's Selenium WebDriver, if you're familiar with that. So basically the idea there is that a lot of companies have people who've been writing Selenium web browser scripts. And why don't we see if we can use the same API to test the mobile application, so that we can use those people and don't have to kind of train them from scratch. And the advantage there is that, you know, there are already libraries out there for Java and Python, JavaScript, whatever language you want, that you can kind of use already. It's already out there. The downside, I think, is that you're getting a bit of an impedance mismatch, because a browser is not the same as a mobile native application. For instance, there's no URL and you don't click stuff, right? You swipe or you perform complex gestures. So they add some stuff on top of it. Finally, there are the platform-specific tools: Espresso and UI Automator on Android, and XCUITest, which is the newest one from Apple. And those are like the official tools. They're there. They come with the platform SDKs. And they use the language of the platforms, which in this case is Java and Objective-C or Swift. But they're not cross-platform. So if you write a test for Android, it's not going to be able to run on iOS. So that's the disadvantage there. But all of these are kind of options you can look into. Now, once you have your automated UI tests and you have your application, you can then go and run those. If you go out and buy 10 devices, plug them into your machines, you can start running those tests on the devices. So already now you're starting to get some value. But of course, are you going to go out there and buy, you know, thousands or hundreds or 300 different models and install the various OS versions? Well, you may want to. You may also want to look at other options there. So this is where kind of the product I work on comes into play. So that's called Xamarin Test Cloud. And the idea is to solve that problem for you. So we buy the devices. We host the devices. We fully automate them. We don't jailbreak them. These are like real devices as the users would go and buy in the shop. They're not kind of modded in any way. And you can test any application, whether it's written in Xamarin or, you know, Objective-C, Java, even the hybrid apps like Cordova and so on. So we basically provide a test execution infrastructure. So you basically just give us the application: here are the tests I want to run, these are the devices I want to run on. We take care of everything. That's how this works. So, yeah, there's a ton of different devices that you could go and either buy or kind of go to the cloud for hosting. This list just goes on. I just like the scrolling. So I put it in here. Just to get a sense of what this actually looks like. All right. You can also think, you know what?
I'm just going to build this myself. How hard can it be? And I just want to say right now: don't, please don't do it. You may think it's like a month's worth of a project, but it will turn out to take four years. At least it's taken us four years. And you end up having to deal with things like batteries like this that start inflating after some time. You don't want your labs catching fire. You also have to deal with OS upgrades, pop-ups, unstable devices, because these are just consumer hardware. You have to build out a parallel execution infrastructure. You have to figure out how to do reporting and screenshots and videos, and so it's just a ton of work. I've seen some people go out and do this. So I just want to put this out there. All right. So now let's do some demo. I guess you want to see stuff. So this is actually a sample that you can go and download yourself. And I think, I was just looking at the program right now, I think there's a talk right now that's using this same sample here. But I guess you can see that online if you're interested after the conference. But basically it's called MyDriving. And it's like an IoT application, an Internet of Things application. It's combining Xamarin to build the application and Azure IoT and cloud services to do a bunch of analytics on top of those data. And what is that data? Well, the idea with the application is you want to track your driving, basically, and compare how you drive against how other people drive. And maybe just against your own driving, to improve. So basically what you can do is you buy this IoT device. So it's a small device that you can plug into your car, and then it interfaces with almost any car out there to gather the data that's being shown in your dashboard. So things like mileage, how fast you're going, what's the engine load like. So this kind of sucks that out of your car and then sends it out either via Bluetooth or Wi-Fi. It's like a cheap device and it will plug into most cars. So the idea is you plug this into your car, then you have your mobile device, and that IoT device is transmitting data to this MyDriving mobile application. And that then forwards the data into the Azure IoT Hub. And then there's a ton of stuff that happens that I don't want to talk about, which is probably the content of that other talk I was referring to. But it's basically like big data analysis, Power BI, machine learning on top of this data, to basically compare your driving against your previous results and against other people out there. But I'm not going to talk about that bit. I'll talk about the mobile app development story and the testing story for this application. But it's actually interesting in its own right. And you can go download everything, including the source code to the mobile app, using Xamarin, which is now free for everyone. And the backend infrastructure code for Azure and all the tests. So everything is out there if you want to look at it afterwards. All right. So, whoops, I was going to actually jump out here. So, let's jump here into Xamarin Studio. And I'm just going to launch the application here. So it's going to compile right now for iOS and launch it on the simulator so you can see what the application looks like. So let's see. Here's the iOS sim. So the idea here is you can log in with Facebook or whatever. And in this test build here, we can just skip the auth to get into demo mode. Now, because this is the simulator, I'm actually in the middle of the sea somewhere.
So I can set the location here to, say, the Apple headquarters here. And I can actually set the simulator to do like a freeway drive. So now it's actually moving. And then we can hit this record button. And it's now saying that there is no connection to one of these devices, but they built like a simulator, so we can pretend that there is a device like this in software. And you see now we are moving around and it's gathering data here on engine load and duration and distance and so on. So let's say, okay, I'm done driving here. We can then save the trip. And we can get a summary here. We drove 0.25 kilometers. It took 13 seconds and we were driving 50 kilometers per hour. Then maybe we stop this driving thing here. Go back to Apple. Then we can review our past trips here. For instance, James Montemagno was driving around Seattle and we can look at the path he took here. And we can kind of drag this here to simulate his driving around Seattle and look at, at this point, he was driving 38 kilometers per hour. And then you can go into like a profile where you get a score - this is Scott Guthrie, and he's apparently an amazing driver in the sample here. And you can look at some metrics here. And there are settings, so if you don't like this stupid imperial system with gallons, you can switch into the reasonable metric system instead. And then these skills and metrics here update, and so on. So that's the basic application. So what if we were to write an automated UI test for this? How would we go about doing that? We have a fairly new product that we call the Xamarin Test Recorder. And that's like an easy way to get started, because you're like, how do I even get started with this stuff? So it's like a standard application here, where I can say, let's say I want to run on Android here, and I want to run this MyDriving application. Now it's actually going to install the application, prepare it for testing and launch it. So it should happen here. It just takes a bit to install. And then it connects to the application to start the test recording stuff. Right. So now we're connected. So now I can hit the record button here and then go to the simulator or device I've plugged in. And then I say, like, I want to skip auth, and you see it's registering a tap gesture here. And then I go into this menu and say I want to go to Settings here. And I want to scroll down and I want to check that there's a Leave Feedback text down here. So I can click here on this. I don't know if you can see it here. Like, there are small crosshairs and that's like assertion mode, which means now I want to make an assertion. So I click this and then I click the element I want to assert is present here. And let's say that's the test I wanted to run. That's a quick smoke test, took me a few seconds to generate. So I hit stop here. And now let's just verify that the test actually works, that it does what we wanted it to do. So we can hit run here. So it's going to restart the application, going to run from scratch, and then replay the steps: tap Skip Auth, tap the menu, hit Settings, scroll down, check that Leave Feedback is present. So that's actually quite nice, right? It's a really fast way to generate a script. And now if you wanted to, you could actually go right now, just with this work, and run this on, say, 10 devices, different Android versions, with your application, right from here, in Test Cloud.
But what I actually want to do is I want to look at the output here, because it knows kind of under the hood what it needs to do in terms of C# code to actually make this work. So what I can do here is say export, copy. And then I can go into my IDE here, and I've created just a new C# file with some basic plumbing, which is: using Xamarin.UITest, using the NUnit framework. Then this class, and this here, this attribute, means that I want to run this test on Android, on the Android platform. And then we have this line here, which is some boilerplate code just to launch the application. So that's the basic plumbing that you need to get started. Now I just paste in the stuff from the test recorder here. You see that it has some things like tap this particular ID that it detected, a screenshot step so you can see screenshots in Test Cloud, and all the stuff you just saw. So let's now compile the UI tests and run this example test here on the Android platform. So this is running from within the IDE. So it has to reinstall and clear the data and then launch the application. So we're tapping Skip Auth, going to Settings, scrolling down. We're good. So quite nice. Now we actually have our first test and we can go and run it on any device. If we have a CI system, we can plug it into CI and so on, with a few minutes of work. I actually forgot to show you something, which is, since this application is written in Xamarin, I can go and change some basic things here, like the color of the bar at the top of the screen. You'll see what I mean in a second. You see, it's blue now. If I wanted to change it to black, that's just a line of C# code I can change. Sorry, I wanted to do this in the beginning. So if we look at the sim here, it's launching, and we go to skip auth. And you see, now this color is black here. So I want to commit that change to my Git repo here. Let's forget about the tests, but let's add this commit here. Black bar. Okay, so I changed my application background to black. All right, so just a moment, I want to just check here. Right, so I forgot one thing here when I was doing this demo, which is: we have our test now, and suppose we wanted to run this in Test Cloud - we can also do that from inside here. So if I right-click here and run, sorry, from here, I could do... What's going on? There we go, sorry, I had to click the very top element. Run in Test Cloud from inside the IDE, pick out the application APK file here. Upload and run. And then you can see it from down there, but it's kind of submitting. It's compiling the application and the tests, getting the application binary and the DLLs out of that compilation step, and uploading everything to Test Cloud. And what then happens is I'm going to pick a team that I want to run this test in in Test Cloud. And what I have here is now a selection of devices. So which devices do I want to run this test on that I just wrote? I'm just going to sort by availability, which is how many devices we have of each of these. I'm just going to pick an Android 4 device and an Android 5 device of various types, just for demo purposes. Then I hit done, and that's going to kind of finalize the upload and go to an application test overview screen. You can see I've been running some tests previously here. So in fact, let's see this test here. That's the new test that you saw before, that I ran before going up on stage here. So: new test, it taps the login button.
So going to this screen here, it taps the sidebar menu, goes on to the settings screen, scrolls down and asserts that the feedback button is there. So that's now run on real devices in the cloud, and you can see that the test passed, and you can go and view the screenshots here. You can go and download the test log and the device log, so you have information if there's a crash. So I also did that running on, let's see, eight devices here. And there was an additional feature I wanted to show you. For instance, if you remember the slider thing we had - so if we scroll up here, maybe just pick a specific device - remember that we could go to this screen here, like past trips, and then we pick a specific trip here. And then we actually have a video, if we scroll to here, a video of what happens when you tap the slider at various points. So if you tap the slider at a specific point, the car will kind of drive around on the map. And there's a video recording available of that, so that if there's an animation you want to see, you can kind of capture that in a video instead of a screenshot, where it can be hard to actually see. We also track the performance data here, like how much memory we are consuming. In this case, we're not actually consuming any CPU because we're sampling at a fairly low rate. So at least you know that the application is doing pretty well then. It's not like spiking CPU usage. All right, I think that's what I wanted to show. Oh, yeah, except this, of course, also runs on iOS. So I've run this on six different iOS devices spanning four different iOS versions, like iOS 9.3, 9.2, 9.1, 9.0, which is what this application supports. And those tests were all executed in parallel, kind of the full suite testing the application there. Good. Yeah, so that was the basic demo here now. As I said, let's kind of just get rid of this thing here. This is a real application with a real test suite that runs on multiple devices. So as you may have realized before, the application is actually different on iOS and Android. Oops, here it is. So for instance, the way you navigate to settings here: on Android, you have to tap this hamburger thing here to get into the settings screen and then go here. Whereas on iOS, the navigation pattern is slightly different. Let's kind of get this launched. Here you have to go into profile and then tap this thing, and then you get into settings. That's because they're trying to build something that feels iOS-like on iOS and something that feels Android-like on Android. So the applications are actually slightly different, which is a challenge for writing cross-platform tests. Now, as a developer you can actually help by putting in the same IDs on iOS and Android for the same types of buttons, because then you can use one line of code to tap the same button across iOS and Android. But in practice, most applications aren't actually built like this. So there I recommend that you use something called the page object pattern, which is actually a very simple idea. It is to abstract the logic of the test into classes that we call page objects. For instance, you may have an object that represents the login page. Let's see if we can find that. So here, login page. On the login page, I might log in via Facebook. I might skip authentication. Those become methods. And then you push the iOS-versus-Android-specific logic into those methods, which enables you to reuse the high-level logic of the test script.
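The talk's own examples are C#, but the page object pattern itself is language-agnostic. As a rough illustration only - all the names here (App, Platform, the selector strings) are hypothetical stand-ins, not the MyDriving code - the shape of the idea could be sketched like this in C++:

```cpp
#include <iostream>
#include <string>

// Hypothetical stand-in for the test framework's app handle
// (in the talk, this role is played by Xamarin.UITest's C# "app" object).
enum class Platform { Android, iOS };

struct App {
    Platform platform;
    void TapText(const std::string& text) { std::cout << "tap text: " << text << "\n"; }
    void TapId(const std::string& id)     { std::cout << "tap id: "   << id   << "\n"; }
};

// Page object: one class per screen, exposing intent-level operations.
// Platform-specific selectors live inside the methods, so the high-level
// test script stays identical on iOS and Android.
class LoginPage {
public:
    explicit LoginPage(App& app) : app_(app) {}

    // Same visible text on both platforms: no branching needed.
    void SkipAuth() { app_.TapText("Skip Auth"); }

    // Different selectors per platform: the branching is pushed down here.
    void LoginViaFacebook() {
        if (app_.platform == Platform::Android)
            app_.TapId("ButtonFacebook");        // hypothetical Android ID
        else
            app_.TapText("Login with Facebook"); // hypothetical iOS label
    }

private:
    App& app_;
};

int main() {
    App app{Platform::Android};
    LoginPage login{app};
    login.SkipAuth();          // cross-platform step: one line in the test
    login.LoginViaFacebook();  // platform differences hidden in the page object
}
```

The point is simply that the test script talks in user intentions, and any per-platform branching is confined to the page object.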
So that's how you structure your tests if you want cross-platform testing. For instance, here on the login page, I can skip auth. In this case, it's actually cross-platform, because you can just do app.Tap the text "Skip Auth", which is the same on iOS and Android. But if you see, if we wanted to log in via Facebook, we actually have this kind of interesting construct here, which is: on Android, the selector, the ID for the login-via-Facebook button, is "button Facebook". But on iOS, it's "login with Facebook". So this means we actually have a bit of branching, but at least that branching is pushed into this abstraction that we call the login page. So if you're building these applications, try to be mindful of the tester: put in the same IDs so you minimize the branching. And if you're a tester, make sure you abstract away the differences using this page object pattern. That's the recommended practice there. All right. Let me see what else I wanted to cover. Oh, yeah. Just to show you kind of some fun stuff: this test, as I said, was kind of a real test for this real application. So we can actually run it on both platforms. So here's the settings test, which changes the metric, sorry, the unit, from the US imperial system to the metric system. So let's run that on iOS here. And as that is running, because the IDE can't do two test runs at the same time, we can actually kick off the Android test from the command line. So we can race iOS against Android. Let's kick that off here. So we're running the same test now on both platforms. So which platform will win? Who's the fastest? Anyone? No? No guesses? You can already see it, right? It used to be that the iOS simulator was a lot faster, but with some recent updates, keeping it stable, launching it actually takes some time. So now iOS is actually slower. But at least you can see what I'm talking about here in practice: that we're actually running the same code, with the branching pushed down into the page objects, but testing on iOS and Android. And those were the tests that also ran in the cloud on real devices a minute ago. All right. I think that's it for demo now. Have you seen this? Yep. And we've seen this. So I wanted to... Actually, let's just let this run to the end. No, it doesn't run in the background in the meantime. You see, this launch step is also a bit slow. All right. Never mind. Let's kill this. So I wanted to say a few words about kind of going... or which direction you want to go in if you start doing this stuff. So what's next? So for me, the ultimate goal, like the end state that you want to get to, is basically to implement a continuous delivery process for your mobile applications. I don't know if you're familiar with that, but here's like the canonical definition of what continuous delivery is. Well, it's a software discipline where you build your application in such a way that it can be deployed to production at any time. So your quality is high enough, and your procedures for deployment are good enough, that at any given time, if there's a business requirement to get an update out there, you can just do it. Now, this is not the same as saying that you're always automatically pushing out updates without human interaction. That's usually called continuous deployment. That means it's happening automatically. It's a system. It's pushing out to the production system all the time. The difference between continuous deployment, which is that, and continuous delivery is...
with delivery, you could do it if you wanted to, but it's still a human that goes in and pushes the button to deliver the application to App Store or to deploy the software. So one of my points in some of the talks I give is that you can actually go and do this. You might think that there's no way we could ever do this for mobile applications, but I hope I can show you in a few minutes that you actually can do this fairly easily, and it's not as hard as you may think it is to implement a continuous delivery process for mobile apps. And if you do this, you're going to get the same types of benefits as you do with other systems. So what are those benefits? Basically, the three major ones are you're going to reduce your lead time, and so what is that? So the lead time is the time from someone fixes a bug, or from the time someone gets an idea, until the implementation of that bug fix or that idea is in the hands of the users. That's going to be reduced because you're continuously delivering, you're continuously pushing your application out or deploying your software to production. So that bug fix doesn't have to wait to get batched up with a production deployment that happens three months from now. So you're faster to get things out. This also means, in turn, you're getting faster feedback, right? Because if you get the thing out there faster, well, then the user has a chance to respond earlier, right? But there's also, from a technical perspective, there's faster feedback because continuous delivery is associated with a lot of automation, like automated builds, automated tests. So, for instance, if a developer makes a change that breaks the build, well, then immediately the system notifies that developer that there's a problem. So that leads to faster fixing bugs, which is way less costly. And finally, the release itself is of higher quality because there's just less stuff in there, right? It's a smaller batch because you're continuously delivering. So each thing you deliver is smaller, which means it's way easier to reason about that delivery. What's in there? How do we test it? It's a lot easier if there's only, say, one thing in there. And of course, the release process itself is reliable because you've done it so many times that you may even have automated the process that it's just easy to do and it always works because you're continuously doing it. And there's also this kind of thing, which I'll zoom into on the next graph, which is just how does the team feel as they're going through the delivery? So it's a really nice graph. It's actually completely made up. I took it from the Atlassian blog in this blog post here. And it is kind of a joke, but it's also true. But it's supposed to graph, like, how does the team feel as they're going through a manual and non-continuous delivery process? So you're kind of developing, you're looking forward to your target shipping date and things are going kind of okay, you're maybe a bit behind schedule, so you're feeling okay but a bit low. And then as time approaches the target ship date, which is when you're supposed to ship and you have caution, you're not shipping, then you kind of feel a sense of urgency. Okay, we got to get this thing done. We got to get it out there. What's preventing us from moving? And as time passes, you feel worse and worse and worse. And then finally, you hit your, like, we actually do ship. You hit your actual ship date and that's like the very bottom, but at least now you did it. 
And then hopefully you feel some relief that we shipped. But actually there's some interesting thing here, which is for mobile, there's an additional low here, which I kind of call the abyss of Apple. I don't know if you're familiar with the abyss of Apple, but it's basically the time from you submit your application to App Store and then it has to go through this thing called App Store review process. It's basically just waiting for the gods at Apple to approve the application that you built through a manual review process. So there's like an extended low here where you're feeling really bad. And that can take anywhere from a day to two weeks. But finally, let's say that the application is released. You feel this slope of relief. It's out there in the hands of the users, but what are they saying? And hopefully they're saying good things and you feel better and you reach the peak of jubilation where, okay, we did it, we got the release out. Until, oh, you realize, I have to go through the cycle again for the next release. So this is the emotional state of the team as you're doing this manual delivery. Now with continuous delivery, it's way more stable. I can tell you that for sure. We've implemented a continuous delivery process for the Test Cloud product itself and it made such a huge difference, both in terms of quality and in terms of how the team feels. So it's a big, big kind of push for me to get that in the hands of mobile app developers. Yeah, so I'd like to say that releasing is like breathing. Like it's automatic. You don't even think about it. It just happens continuously all the time. All right. The problem is that doing this for mobile is actually not that easy. There's continuous integration. As I said, only 25% of teams out there are doing it. And you have to set up like special hardware, like you need Mac machines to actually build iOS applications. Probably your existing CI infrastructure doesn't really have Macs in it, because why would it? And there's no book you can go out and read to figure out how do I set up a CI pipeline for mobile. We already talked about testing in the realistic environment in the actual devices. That's hard. You have to go out and buy them or you have to plug in into one of these products. And there's the whole App Store review process, which means that you have like an unknown delay between you finish until it gets out there in the hands of the users. Just really frustrating. And there's a ton of complexity around things like code signing, push certificates, provisioning profiles and certificates. So there's just a lot of stuff to learn. And this is what I'm not going to go through this in detail, but this is what a CI, a continuous delivery pipeline, might look like for an iOS app development. So you have a source control change, triggers a build, runs unit tests, integration tests, runs UI tests both on the iOS simulator and on real devices. Probably needs to go through some sort of manual testing process, because there are some things which just can't automate. But at least you should make that manual testing step as easy as possible automatically distributing the application to your testers. Then there's a ton of things like resigning, generating screenshots for the App Store page, uploading to Apple, pushing the submit button, waiting for Apple to review. Then Apple says, okay, your application is good. You have to now go and publish it. 
And then there's a final step that you don't even control, which is - unlike with web systems, where you push the update out to the user - with mobile apps, the user has to go and actually update. So you can't even control the fact that they're updating. So you may feel like this is totally daunting. How am I ever going to build all this infrastructure when we have nothing today? And what I want to argue is that you can actually get started with a minimal version of this that's going to deliver a lot of value within hours or minutes, or maybe a day if you want to get your team up and running. And basically what you can do - and this example is using Visual Studio Team Services; there are a bunch of other kind of cloud CI or continuous delivery vendors out there - but basically what you get here is reacting to a source control change, triggering a build, running the UI tests I showed you before in Xamarin Test Cloud, so running on real devices, getting the results back, and easy distribution of the built binary to your manual testers, within a few minutes of setup. So let's kind of quickly review this. I want to leave a bit of time for questions also. So basically inside of, whoops, here, Visual Studio Team Services, I want to focus just on the build part of that, which is basically supporting building mobile applications now. So you have here - I'm going to focus on iOS here - so I set up what's called a build definition for iOS, kind of specifically for these conferences. And let's look at what that looks like. So to set up a build, you need to connect to a repository. So I've connected this thing to GitHub, the Azure Samples MyDriving repo, and I connected it to a specific branch in Git, the Evolve branch here, because I used this for the Evolve talk. And then you set up a trigger, which is basically, I want to checkbox continuous integration, so that if there's a change to this branch, it triggers my pipeline. You do need to have a Mac, because ultimately iOS needs to build on a Mac. I've set up kind of just this machine here and configured an agent on this machine to do the build. But you can also go to things like MacinCloud to get a cloud-hosted Mac to build on. Once you have that, setting up these build steps is kind of a drag-and-drop thing. You can add things here like building Xamarin applications, Xamarin.iOS, building Xcode applications and so on. So here we have some basic boilerplate stuff, and then the pipeline is: restore Xamarin components and NuGet packages, build Xamarin.iOS, package the app as an IPA file, which is the format you need to deploy to a device. And then there's a step already inside VSTS to run the tests on Xamarin Test Cloud right from in here. So basically what you do there is you set up an API key and a user, and choose which devices you want to run on with every commit. Now we've done this, and you see now that there is actually a build that completed eight minutes ago, which passed. And you remember the time when I had to jump back and do this commit thing? That's because I wanted to show this kind of eight minutes later. So as I was changing the application background to black, and as I was talking, this was actually building in the cloud, publishing and running the tests on Xamarin Test Cloud, and getting results back here, and we see we got 12 passes here. And we can actually click directly into the Test Cloud link here to see what actually happened. I just ran a small subset of the tests, so it would be really fast.
But you see here that the change I made was actually reflected. So this has the black background, whereas we had the blue background before. So all of that happened automatically while I was talking or drinking coffee, and you can set it up to happen with every commit. And I know there's a bit of setup here, but once you're inside, it doesn't take more than, let's say, an hour to configure this thing, and you have a CI pipeline. And I know it's not the biggest, fully perfect pipeline, but it's giving you a lot of value for a very small investment. You can also connect this to HockeyApp if you're familiar with that, which lets you do crash reporting. So if there's a crash out there in the wild, you get that registered inside of this web system here. You can see there were some crashes back in February on this application here. And you kind of get some metrics on which OS and device it happened on, and the iOS stack trace here. This hasn't been symbolicated for whatever reason. And the other big feature of HockeyApp is that you can distribute builds of your application to the manual testers, and that happens automatically. So all of this - and I'm speaking fast because I'm trying to leave time for questions - but my point is that you're getting all of this with a few clicks and a bit of configuration using these cloud products. All right. So the final point is basically this quote, which I really like. You don't have to read the full thing, just the last bit, which is basically: whenever you have an opportunity to submit a build to Apple, you should do it. So you should be continuously deploying your iOS application, even though there is this Apple review process. And right now Apple is the bottleneck. It takes between one week and two weeks to actually get a build reviewed. So your maximum release frequency for iOS applications is, say, once per week or every two weeks, because this thing is limiting you. But something is changing now, which is really interesting. If you chart the time it takes for Apple to complete their app review process, it's been dropping for about a year now. So you see now, at the very, very rightmost corner of this, we're actually down to about a day, which means you can publish new updates every day for your iOS apps also, which is a huge difference from the two-week cycle. Does anyone know what the spike is? Huh? Higher. Speak up. Holidays, yeah. So that's not very nice. They would actually go on holiday and everything just stops. I don't really like that. They should fix that also. But I guess they need their holidays too. So my point is: automated UI testing - it takes minutes to get started; it takes longer to get good. CI and delivery - you can set it up probably in a day within your team, and you have a CI pipeline, and you're basically implementing continuous delivery for your mobile apps. You can go and do it. That's the point of my talk. A few links, and I'm happy to take any questions, but you have to speak up because it's really hard to hear. What's the pricing per month? So, for Xamarin Test Cloud? Yeah. Yeah, so we have a free tier now, so you can go try this for free. And then the entry point is $99 per month for the lowest tier, and then it scales up depending on what we call device concurrency, which is the number of parallel devices you're using at the same time. But we changed the pricing a lot to make it way more accessible than it was a year ago. And VSTS is pretty cheap also. More questions? Yeah.
You can use it to test any mobile application. It's not restricted to Xamarin. Hybrid apps, native apps. More questions? All right. I'm also happy, if you want to know more details, or you don't like to speak in public, you can come talk to me after the talk. But thanks very much for listening, and I hope you have a great conference. Thank you.
An ever-growing number of mobile devices with constantly advancing operating system releases are hitting the market at a lightning pace. Creating a comprehensive testing suite is imperative to success in the mobile market, to ensure your app is of the highest quality with each and every release. Unit tests can only test your core business logic. How can you ensure your user interface is bulletproof and regression-free on four versions of iOS on 20 devices, or eight versions of Android on over 18,000 device models? This is where automated user interface testing for mobile apps comes in. Xamarin.UITest is a freely available testing framework that enables you to create user interface tests to programmatically interact with native and hybrid apps. Swipe, tap, or rotate any user interface element, and then perform real-world assertions and take screenshots for visual validation along the way. Learn how to create these tests and run them locally on your own device or simulator, or take them to the Xamarin Test Cloud to automatically test your application on thousands of physical devices, ensuring mobile success.
10.5446/51726 (DOI)
So, basically, building contracts for your APIs: making sure certain properties of types are met before your API is invoked and can do stuff at runtime. The other thing is member detection. So basically building your own type checkers, which verify that certain types have certain properties, have certain members, have certain subtypes. Then we have algorithm selection. So basically we'll specialize a certain algorithm, such as an STL algorithm or one of our own, based on the type properties that we detect. So there are a bunch of examples, and I'll show you a couple. And finally, towards the end, there are a couple of examples of compile-time computation with constexpr, which really allows for a lot simpler type checking and a lot simpler computation, where you'd have to use recursive templates and a bunch of complex stuff before. I'm going to just be using a lot of examples, and most of the stuff I'm doing here requires C++11. It should work with any C++11-conformant compiler, maybe except for Visual C++, which still has some incompatibilities. Some of the code requires C++14, but it can be rewritten to only require 11. And without some features such as decltype, which were introduced in C++11, it's quite hard, or anyway harder, to implement some of the things I'm going to talk about, and some are even impossible, actually, with the older version of the standard. So, some motivation and examples. Here's an example of a situation where constraint checking could actually be useful. So if you're sitting in the back, you can't even read this whole thing. But this is basically what happened when I just launched Xcode, built a little C++ project, and tried to use as a map key a type that doesn't have the less-than operator. So I built an std::map, and the key is a type that doesn't have the less-than operator. Now, just declaring a map like that, as you probably know, doesn't cause any issues at compilation time. However, trying to use it - so once I try and insert an element into the map with a certain key - that blows up. And this is just a very small part of the actual compiler error message that I received. But one thing to note is that the error is coming from Clang in this case. It's pretty decent, so it's telling me: invalid operands to binary expression less-than, where I am trying to invoke less-than on values of type const Foo, and Foo is the key type in my map, which doesn't have the less-than operator. But then if you look at some of the further notes emitted by the compiler, there's a bunch of nonsense here. So why do I care about something called a tree with a value compare? And if you look a little lower, it has some ridiculous stuff about not being able to match the operator less-than for pairs of whatever. So if you're an experienced C++ developer, you've seen this error before, and you know that you just forgot to declare the less-than operator, so your keys are invalid. But for less seasoned developers, this is very troubling. And they can spend a while just posting this thing to Stack Overflow and getting flamed by everyone. So better error checking, for example in the STL, could be really helpful if at compile time we could get a nice error message that says: well, in addition to all these other problems, it is likely that what you forgot to do is just to declare less-than or specialize std::less for your type, and just move forward.
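A minimal reproduction of that situation, with a made-up key type, might look like this:

```cpp
#include <map>

struct Foo { int value; };  // no operator< declared

// Without the following operator, the m[...] line below triggers the long
// template error described above. Defining it (or specializing
// std::less<Foo>) is the one-line fix.
bool operator<(const Foo& a, const Foo& b) { return a.value < b.value; }

int main() {
    std::map<Foo, int> m;  // declaring the map compiles either way
    m[Foo{1}] = 42;        // inserting is what actually requires operator<
}
```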
So the other example I've mentioned is algorithm selection, where you have multiple versions of an algorithm that can be more performant, more efficient for certain types. Here are a couple of classic examples from the STL, which a lot of STL implementations actually use. So std::copy can be specialized if the thing we are copying is a plain old data type - doesn't have any sophisticated assignment behavior - and you're copying from pointer ranges, not iterators. So if you basically have a memcpy situation and you are allowed to just optimize and use memcpy instead of the slower loop version of std::copy, that can be a considerable performance gain for certain types. Another example is std::distance, and distance, as you know, finds the number of elements in a range between two iterators. So for certain iterators, such as random access iterators, it can be implemented in constant time, O(1), right? And for, say, forward-only iterators, you'd have to actually traverse the iterator, so you'd have an O(n), a linear-time implementation. So that's another example of where figuring out some information about the type could help optimize an algorithm - select the right algorithm at compile time. And then another example, as I said, is compile-time computation. So certain things can be pre-computed at compile time or solved entirely at compile time. One example that I really hope to show you towards the end is string validation. So if the user provides a literal string as an input to a certain function, we could check that that string has certain properties at compile time instead of throwing an exception at runtime. And for certain APIs, such as maybe printf, right, it can be useful to emit errors as early as possible, at compile time, instead of deferring to runtime. So there's a bunch of uses which hopefully illustrate this is a practical thing to talk about, even though not the most practical in the world. The main tool we'll be using throughout this talk is template specialization. So this is just a very, very quick reminder of how this works, what it looks like. So basically here we have a partial and an explicit complete specialization of the less template. So less has an operator that can be invoked as a function, and it compares two objects, determines which one should be sorted before the other. We also have a specialization for pointers here, which compares not the pointers, but compares the targets, right, the pointees - the things the pointers are pointing to. And finally we have a complete explicit specialization for a char pointer, where we use strcmp. So we're not comparing the chars, we're not comparing the pointers either, we are comparing the whole string until the null terminator. So that's just the tool we're going to be using, and hopefully you're all familiar with that, but it never hurts to just see a quick example.
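Reconstructed from that description (a sketch of the slide's idea, not necessarily its exact code), the three versions of less might look like:

```cpp
#include <cstring>

// Primary template: compare the two values directly.
template <typename T>
struct less {
    bool operator()(const T& a, const T& b) const { return a < b; }
};

// Partial specialization for pointers: compare the pointees, not the addresses.
template <typename T>
struct less<T*> {
    bool operator()(const T* a, const T* b) const { return *a < *b; }
};

// Explicit (complete) specialization for C strings: compare the whole string
// up to the null terminator.
template <>
struct less<const char*> {
    bool operator()(const char* a, const char* b) const {
        return std::strcmp(a, b) < 0;
    }
};

int main() {
    int x = 1, y = 2;
    bool ok = less<int>{}(x, y)                  // primary template
           && less<int*>{}(&x, &y)               // pointer partial specialization
           && less<const char*>{}("ab", "ac");   // C-string specialization
    return ok ? 0 : 1;
}
```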
A lot of these can actually be implemented pretty easily, so if you're looking for exercises on metafunctions, you can implement a lot of these type traits yourself. There are a lot of corner cases to take care of, but you could. A lot of these, however, cannot be implemented without actual compiler support. So the standard library implementation on a particular platform would have to ask the compiler for hooks which implement some of these type traits, because there's no way to express checking for that kind of constraint in the language itself. So, some useful examples. is_pointer, for example, checks if a type is a pointer type. And it sounds easy, but if you actually look at the implementation, you have to take care of pointers to functions and pointers to members, and const pointers, and pointers to const, and a bunch of other edge cases. So it's useful to have this in the library. We have more complex traits, such as is_nothrow_copy_constructible, right? So does the type T have a copy constructor which is marked noexcept, and will not emit exceptions at runtime? We can use a lot of these, and we can develop some of our own type traits based on what's available in the library, and based on other techniques as well. So here's a very simple example of using a type trait. This is not a standard library situation; it's just a type I figured I want to develop. It's a class called array. There's actually std::array in the standard library, but suppose that's not what we're seeing here. And my array wants to be a no-throw kind of type, a noexcept type. I want the constructor of the array to not throw exceptions. I want all the members, actually, to do the same. And so if in my array I have a statically allocated array of Ts, I would like to produce a clean error message in case that type T does not have a non-throwing default constructor. So I can mark my own class's constructor as noexcept, and that will produce some error message if T doesn't meet the necessary requirements, but I actually want to go further than that, and I want a clean assertion that says exactly what happened if the user uses an incorrect type. So I can put in a static assertion that uses a standard library type trait and checks explicitly that T has a default constructor - a parameterless constructor - which does not throw, which is marked noexcept explicitly. Another example of using traits, in a situation that's not exactly error checking but rather sort of making sure your implementation is very accurate, is this example from swap over here. So this is std::swap, pretty much, with some minor changes introduced. Everyone's familiar with std::swap. And what it does inside is it tries to use move operations, right? In case the type T supports move, I wouldn't like to copy to a temporary and then copy back from the temporary; I'd rather just do moves all the way. And I would like the swap function to be marked noexcept. I would like it, but I'm not actually sure if the code inside is going to be throwing exceptions or not. I don't know in advance whether, for example, the move constructor and the move assignment operator for T are marked noexcept or not. So I don't want to emit an error in that case. I don't want to place a static assert in my swap that says: no, you can't use this swap function if your T is not noexcept, because that is just plain bad. Instead, we could, in this case, for example, use the conditional form of the noexcept operator.
So we could say that, OK, this function is noexcept in case certain properties are met. This is the noexcept operator, which takes a Boolean value that has to be known at compile time. So my swap is noexcept as long as T has a noexcept move constructor and a noexcept move assignment operator. If these two properties hold, then the code I wrote is actually going to be noexcept-correct. So that's just another use of constraint checking and type checking that we could introduce, even if not to emit errors, but just to make sure we're really expressing ourselves very accurately. OK, so suppose we have the basic type traits figured out, the ones that are in the standard library, but we have a unique situation of our own. We have some kind of property of a type that we want to detect, that we want to test for. One example that I often use in training workshops is determining whether a certain type is an STL container. So for example, imagine you're building a serialization framework, or maybe just a debug printing framework, that takes an arbitrary object and wants to print it as accurately as possible. If the type already has an output operator, then I'll just invoke that. If the type is a container, I would like to detect that it's a container and print out the contents, print out the elements of the container, and do so recursively. So this can be useful for serialization, and it can be useful for debug printouts, but the question is how this generic template print function, or serialize function, is going to figure out whether our type is a container or not. We'll have to have multiple implementations and pick the right one, but in order to pick, we have to ask: is this thing a container? So we could go with the formal definitions of what a container is, or maybe a range if we want to work with somewhat newer concepts, but it still boils down to asking: does a type have certain properties? So for example, to be very naive, I could say a type is a container if it has a begin member function. A bunch of types might have a begin member function and not be containers, and also arrays are containers but they don't have a member function called begin, but that would be an approximation. Or I could maybe say a type is a container if it has a nested typedef called iterator, right, like vector::iterator, map::iterator and so on. Again, that wouldn't work for arrays, but it would be an approximation. And if we wanted to be super accurate, we could just ask: what do we want to do with that object if it is a container? So the answer is, if it is a container, I want to enumerate all the elements in the container, and then I could just test for that directly. Can I invoke begin and get something that looks like an iterator, which I can increment and compare and dereference? So there's a bunch of type properties that we want to be checking. So this happens a lot, again, if you're building generic libraries that have to care about the properties of the given type, sometimes just to reject certain types and in other cases to specialize your decision based on properties of a type. So let's build something that detects whether a type is a container. There's a bunch of techniques for this. Even in C++98 we could sort of ask if a type has certain members, for example, or if a type has certain nested typedefs, but in C++11, with decltype and a bunch of further advances, we have easier solutions for this problem.
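Here's a minimal sketch of that conditionally noexcept swap, essentially std::swap with the condition spelled out using the library traits:

```cpp
#include <type_traits>
#include <utility>

template <typename T>
void swap(T& a, T& b)
    noexcept(std::is_nothrow_move_constructible<T>::value &&
             std::is_nothrow_move_assignable<T>::value) {
    T tmp(std::move(a));  // move to a temporary instead of copying
    a = std::move(b);
    b = std::move(tmp);
}
```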
So again, we want to build our own type traits which ask: does a type have a certain member, or, more generally, does a certain expression make sense for my type? Does it compile? And maybe even: does it produce a certain type as a result? For example, for the dereference operator on iterators, I actually want it to produce a value; if I have a void-returning dereference operator, that's a little fishy for a container. So we want to ask if a certain expression makes sense and maybe produces a particular type. So there's a very cool trick here, introduced by Walter Brown, I think a couple of years ago at CppCon. I was very impressed when I saw it first. There's a bunch of other ways to express the same thing, but this one is so clean and so concise that I really fell in love with this technique, and I've been telling people about it since. So basically, you have this super simple and weird-looking template. It's actually an alias template, which defines void_t as void for any type. So this is a template: you could say void_t of int, void_t of float, void_t of std::vector, and it's always just void. It's always defined to be void. The point is not that it is void. The point is that if we try to use void_t with a type that's not well formed, with a type that's not valid, then this thing just has no definition; it doesn't make sense. So void_t wouldn't be sensible, whereas if we use any valid type as the template parameter here, we would always get void, regardless of what we put in. There are alternatives for this in the standard library, but it's really clean to just see it this way. So here's how we use it for isContainer in our example. Up top, there's the first version, the more general version of the isContainer template, and it has two template type parameters: T, and another one which is unnamed, I don't care what it's called, and which has a default type parameter value of void. So this one is the more generic template, it's not a specialization, and this one is defined to inherit from std::is_array of T. So basically, if we go to that isContainer template up top and we ignore everything else on the slide, then it's basically equivalent to asking: is T an array? If T is an array, then this template recognizes it as a container, and if T is not an array, then we don't recognize it as a container. So we've taken care of arrays separately. And is_array, of course, is defined in type_traits, and we assume it works correctly. And then we have the specialization. It's occasionally unclear why this thing is even a specialization of the first thing. So how would you convince your friends that this second definition is a specialization of the first? It has fewer type parameters. It has fewer type parameters, right? So the one up top has two type parameters, even though the second one has a default value, but the one on the bottom only has one type parameter. So that's enough for me, I suppose. And this isContainer template is a specialization in case, well, the first argument is just the T we got passed in, and the second thing, the second type parameter, which has a default value of void over here, is that void_t template parameterized with T::iterator, right, with the nested iterator typedef. And if we use this specialization, then I just inherit from true_type to indicate that yes, we are a container.
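Here's roughly what that slide's code looks like, a sketch of the void_t trick applied to isContainer (C++17 later standardized this helper as std::void_t):

```cpp
#include <type_traits>

// Walter Brown's void_t: maps any well-formed set of types to void.
template <typename...>
using void_t = void;

// Base case: only arrays are recognized; everything else is "not a container".
template <typename T, typename = void>
struct is_container : std::is_array<T> {};

// Specialization: chosen only when T::iterator is well formed.
template <typename T>
struct is_container<T, void_t<typename T::iterator>> : std::true_type {};
```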
So basically, if this specialization is chosen, then we've deduced that T is a container. OK, so for this specialization to be chosen, what needs to happen is that void_t of that type is well formed. If it is not, if our type is, for example, an int, then what we have over here is just int::iterator. That doesn't make sense; that's not well formed. And then substitution failure kicks in, and we just ignore this specialization. And we have our base case, which will then deduce that int, because it's not an array, is not a container. But if T does have a nested iterator typedef, then this thing over here is well formed, and this is just plain void, right? And then this is a specialization of the previous case, and this specialization is a better match, so the compiler, when doing the template specialization pattern matching, would pick this version and deduce that our type is a container, because we inherited from true_type. Just one final thing: why does it matter that this here is void? Why did they put void in here and not something else? I was waiting for someone to answer. Yeah, no, so what I'm asking is why this has to be void and not, say, int, right? Because this has to be void, and the thing that happens is: if the user of my isContainer template doesn't provide the second type, so my user says isContainer of int, the user doesn't specify what the second type is, then I would like both versions to be potential candidates. So in this case, isContainer, the first version, would be isContainer of T and void, and this one would be isContainer of T and whatever void_t produces. So if void_t produces void, we have our two candidates, and the more specialized one can be picked. But if here I said int, for example, then when the user says isContainer of whatever, what the user actually said is isContainer of whatever and int, right, and then the second specialization would just never be valid, because void_t doesn't produce an int, it produces void. So all the parts here matter, right? It does matter that we have void up there, and it matters that void_t returns void. Everything's important in this skeleton, if you will, for member detection. Now you might ask, can this be generalized further? And yes, it can. So here, we're just testing if a type has a certain nested typedef, but we can make this even more generic, if you will. So if I want to test that a certain, pretty much arbitrary, expression on arbitrary types makes sense, and that's the trait I am checking for, that's the property I'm testing for, then the general recipe would be: we define an alias template for that specific test, and it's basically just the decltype of whatever that expression returns. So suppose I want to check if T1 has a member function called foo which takes a T2, right? So I have this alias template, which is a template in T1 and T2, and it is declared as the decltype of this expression here, which declares a value of type T1 reference and then invokes foo on it, passing in a T2. And the declval function is just a very silly little function from the standard library. It has no body, it's just a declaration, because we use it in contexts which do not require evaluation. So it's basically just declaring a T1 for us, or a T2 for us. That's the only thing we care about. Once we have that test, we just plug it into our skeleton.
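The alias-template test being described might look like this; foo is of course just a stand-in member name:

```cpp
#include <utility>

// "Does T1 have a member function foo that accepts a T2?" expressed as a type:
// this alias is well formed only when the expression inside decltype compiles.
template <typename T1, typename T2>
using foo_test = decltype(std::declval<T1&>().foo(std::declval<T2>()));
```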
So the generic version of "is my condition true for a certain pair of types", the one over here, the generic version, takes T1 and T2 and that dummy void template parameter, and the specialization takes T1 and T2 and whatever void_t returns when invoked with my test, which is up there. So if that expression makes sense, if this thing here is well formed, then void_t produces void and we pick the second specialization, and if it is not well formed, then we pick the first one and we reject that property. And we could even test, if we wanted, not just that my test is well formed, but what kind of thing it is. So I could add, for example over here: is my test of T1 and T2 actually, say, an int, or a const reference, or whatever else I wanted to assert? So we basically now have a generic recipe for checking properties of types, thanks to this void_t trick. This, again, requires compiler support. So for example, my usual go-to compiler, which is Visual C++, still has some restrictions around expression SFINAE, expression substitution-failure-is-not-an-error, so it would actually reject a bunch of these examples. But the more conforming compilers, GCC and Clang, are okay with this. And Microsoft is working on fixing the edge cases that still don't work. And actually this kind of template is pretty useful in the standard library as well. So just as an example, in the standard library there is a type trait called is_assignable, which just determines whether one type can be assigned to another using the assignment operator. You could easily implement it using this generic trick, right? So over here, in the decltype body, we would have just declval of A, assignment, declval of B, and we'd have to test if that is well formed. So standard libraries all over the world can now use this instead of requiring compiler hooks or some sophisticated one-off tricks. There's just a generic trick that works for all the cases. So that was member detection. And once we have that, let's talk about some strategies for actually using the property we discovered for choosing one of multiple algorithms. Again, if we want to optimize std::copy, for example, for pointers to primitive types as opposed to the general case of iterators we know nothing about. So there are three general strategies I want to show you, which are common in a bunch of different template libraries, including the STL. One is explicit specialization, where you write all the boilerplate yourself. Another is tag dispatch, which is useful and used a lot in the STL, where you have multiple versions of an algorithm. So basically the property you're testing is not binary, true or false; it has multiple degrees of being true. And there's also std::enable_if, which is usually used if you want to disable a certain function, if you want to remove a certain function from the set of overloads that you would want the compiler to consider. So let's take a look at a few examples. Again, just an overview of the tricks that a lot of libraries use. So this rather verbose thing on the screen is an attempt to specialize a sort implementation, just a general sort that takes iterators, for two cases. One where the type we're sorting just has general properties, is generally comparable, right? So it has a less-than operator, which provides order. And the other case is if it's hashable, if there is a way to produce hashes from values of that type.
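Plugging that test into the same void_t skeleton gives a generic property check, roughly as follows (the names are mine, not the slide's; void_t and foo_test are as sketched above):

```cpp
#include <type_traits>

// Generic case: assume the property does not hold.
template <typename T1, typename T2, typename = void>
struct has_foo : std::false_type {};

// Chosen only when foo_test<T1, T2> (the decltype test above) is well formed.
// One could additionally constrain the expression's type here, e.g. with
// std::is_same<foo_test<T1, T2>, int>.
template <typename T1, typename T2>
struct has_foo<T1, T2, void_t<foo_test<T1, T2>>> : std::true_type {};
```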
So if it is hashable, maybe I'm going to use a different sort algorithm, like radix sort, and if it's not hashable, I'm going to use something else. It's just an example; I'm not saying there's a practical use of this exact sort implementation. But we want to ask about the property of the type and then redirect to two different implementations. So I can have this base case of a sort_impl class template, which is undefined. Basically I don't care about it, because I am going to have specializations for both of the cases. So this base template is a template of some type and a Boolean. And then I have two specializations: I have one for some type and false, and another for some type and true. So I've basically covered all the cases, because Booleans can only be true or false. So there shouldn't be any case where we actually get to the base template and the compiler would complain that we haven't actually defined it. So for false, I have one definition of a static sort function, and for true, I have a different definition of a static sort function. So basically this is just a way of picking overloads at compile time. I have two functions in two different class template specializations, and I'm choosing the right one based on whether some property is true or false at compile time. So now I just have to ask: is that property true? And redirect to one of the options. So in this version, which is the publicly accessible one, the thing that clients actually invoke, I parameterize sort_impl by whether the value type of that iterator has some property: is_hashable_v. That's not a standard library thing, but it's a type trait that we built using the techniques we talked about previously. So sort_impl of my iterator type, and either true or false, just picks one of the two class template specializations we had over here, and then I invoke the static sort method on that, passing in my iterators. So we have just successfully picked one of two implementations based on a Boolean property of some type. This is a bit verbose, but it works, and in some cases you might actually prefer it. For example, if the type you're using actually has to have some state and member functions, it's actually useful to just pick one of multiple class template specializations and then work with that class template, kind of like the strategy pattern, except you're doing it at compile time and not at runtime. And this can obviously be expanded further to a bunch of additional examples. The other alternative, which is pretty common in the standard library, is tag dispatch. This doesn't require class template specialization, just multiple overloads of a certain function, and the overloads differ only by a trailing parameter, usually, whose type can take multiple values. So here's a classic example from distance. This is an STL algorithm that finds the distance between two iterators, right? How many elements are in the range between two iterators? So this is the publicly accessible declaration: distance takes two iterators. What it returns is, well, whatever iterator_traits' difference_type is for my iterator type. So conveniently, we have iterator_traits, which is supposed to tell us things about iterators. This is yet another example of a meta function that takes types and returns values and types for us.
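A sketch of that specialization-based dispatch; is_hashable_v stands in for the homegrown trait discussed earlier, so a trivial placeholder definition is included here just to keep the sketch self-contained:

```cpp
#include <iterator>

// Placeholder for the homegrown trait; a real version would use the
// void_t detection technique shown above.
template <typename T>
constexpr bool is_hashable_v = false;

// Undefined base template: we only ever instantiate the specializations.
template <typename Iterator, bool IsHashable>
struct sort_impl;

template <typename Iterator>
struct sort_impl<Iterator, false> {
    static void sort(Iterator first, Iterator last) {
        /* comparison-based sort using operator< */
    }
};

template <typename Iterator>
struct sort_impl<Iterator, true> {
    static void sort(Iterator first, Iterator last) {
        /* hash-based sort, e.g. radix-style */
    }
};

// Public entry point: picks the right specialization at compile time.
template <typename Iterator>
void sort(Iterator first, Iterator last) {
    using V = typename std::iterator_traits<Iterator>::value_type;
    sort_impl<Iterator, is_hashable_v<V>>::sort(first, last);
}
```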
And then I invoke distance_helper, which is not a publicly accessible interface, and distance_helper takes the two iterators, first and last, as well as an additional parameter. And this one is also obtained from iterator_traits. So in iterator_traits I have iterator_category, which is a type. I instantiate that type over here, and that's what I pass to distance_helper. So now distance_helper can be overloaded for the various types that iterator_category can produce. Just for simplicity, suppose iterator_category were sometimes an int and sometimes a float. I could then build two distance_helpers, one that takes an int and one that takes a float as the third parameter, and in that way specialize my implementation. iterator_category, fortunately, doesn't return int or float. It's a meta function that returns one of multiple types, indicating what kind of iterator we have on our hands. So it could be a forward iterator, an input iterator, a bidirectional iterator, a random access iterator, and so on. And here's what the actual helper could look like. So for random_access_iterator_tag, which is the iterator category for random access iterators, my implementation just uses the minus operator, right? We just find the distance between the two iterators. And for anything else, this is the second version of distance_helper: it takes the base tag class, the base class of all the iterator categories. So for that base tag, we have the implementation here, which declares a counter and then just traverses the range, incrementing the counter till we reach last. So while first is not last, increment first, increment n till we get to the end, and then return n. So this is going to be the linear time version, and that's going to be, hopefully, a constant time version. And we just used a type property, returned by iterator_category, to choose one of the two. So that's a pretty common technique as well. And a third example I want to show you is enable_if. So here we have a widget class, just like a lot of classes, and it also has this constructor over here, which takes a universal reference to any T. And the reason you'd usually have that kind of constructor, does it work now? Okay. So the reason you'd usually have that kind of constructor, which takes a universal reference, is not to emulate a copy constructor and a move constructor on your own type, but rather to enable construction from some other type. So for example, maybe my widget can actually be constructed from, say, a tuple of configuration values. And I want to optimize for two cases, one where I am passed an r-value reference, which I can move from, and the other where I'm passed a const reference to an l-value, which I can only copy from. That's a pretty common situation. But even ignoring the reason why I actually have that kind of templated constructor, the issue here is hiding. What can happen is that if the user of my widget class initializes a widget and uses an l-value reference to a widget which is not const, so basically just widget ampersand, an l-value reference to a widget which isn't const, then the compiler can choose between my copy constructor, which takes a const reference to a widget, and this template constructor, which was never intended for situations where T is a widget. But it would apply in that case, and it would be picked, because this constructor works where the T ref-ref actually resolves to just widget ampersand, to an l-value reference to a widget.
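Here's a sketch of that tag dispatch. The talk's slide uses its own base tag class; in this sketch std::input_iterator_tag stands in for the general case, which works because the standard tags for forward, bidirectional, and random access iterators all derive from it:

```cpp
#include <iterator>

template <typename It>
typename std::iterator_traits<It>::difference_type
distance_helper(It first, It last, std::random_access_iterator_tag) {
    return last - first;  // O(1)
}

template <typename It>
typename std::iterator_traits<It>::difference_type
distance_helper(It first, It last, std::input_iterator_tag) {
    typename std::iterator_traits<It>::difference_type n = 0;
    while (first != last) { ++first; ++n; }  // O(n) traversal
    return n;
}

// Public entry point: an instance of the category tag selects the overload.
template <typename It>
typename std::iterator_traits<It>::difference_type
distance(It first, It last) {
    using category = typename std::iterator_traits<It>::iterator_category;
    return distance_helper(first, last, category{});
}
```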
So we have a constructor that takes a const reference and a constructor which takes a non-const reference, and that second constructor would be preferred. So essentially, we built a universal constructor which hides our own copy constructor. This is undesirable. So we can ask the compiler, essentially, not to pick that specific overload, that specific constructor, if I'm using a widget as T. So the way this typically works is that somewhere in the signature, and there are a lot of places you could actually put it in the signature, you put enable_if. And enable_if is very similar to void_t, actually. It just determines whether an expression is well formed, and then it's used, via substitution failure, to remove that overload from the set of overloads that can be considered. So in this case, the expression I'm passing to enable_if is: not is_same of T and widget reference. So basically a Boolean which says whether or not T is the same thing as an l-value reference to a widget. So if T is the same thing as an l-value reference to a widget, this expression, because of the not here, is going to be false. And then we have enable_if of false, and enable_if of false is not defined. So it's not a type, it's not any type, whereas enable_if of true is void, kind of like our void_t template. So in the case where enable_if here produces void, this template overload is considered valid, and in the case where enable_if takes false, and it is undefined, this constructor is just removed from the set of candidates, because there's a substitution failure when trying to match the template type arguments. You could place that enable_if in a bunch of additional places. When reviewing template libraries, you might have seen enable_if as an additional parameter, right? So in some cases you put it as an additional parameter which has a default null value. You could also use enable_if as the return type, not in a constructor though, constructors can't have declared return types, but in other functions you could use enable_if as the return type, again just to get the compiler to consider that type as part of the function's signature. Again, if it doesn't match, then that overload is removed from the set of candidates that the compiler would consider. So in this way we can just disable this overload in certain circumstances, and this trick is used a lot in the STL. Yeah? Is there a way to apply this trick if it's not in a constructor, without the constructors? What do you mean? If you have a template like widget of T and you have a copy constructor and a move constructor, and you wanted it to choose the copy constructor in the cases where it was actually copying instead of moving. As you said, the templated constructor will always be preferred if it's a non-const l-value. So if you declare a copy constructor and a move constructor, then the move constructor will be chosen if you have an r-value reference, a reference to a temporary; if you have a reference to an l-value, then the copy constructor will always be chosen automatically. Just think about it: your move constructor is going to be moving, so it's going to destroy the object. If you had an l-value, that would be extremely unsafe. You don't need this trick at all if you just declare a copy constructor and a move constructor and you want the compiler to pick.
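A sketch of the constrained constructor being described, following the talk's check against an l-value reference to widget (production code often checks std::decay<T> instead, to also cover const and r-value cases):

```cpp
#include <type_traits>

class Widget {
public:
    Widget() = default;
    Widget(const Widget&) { /* copy */ }

    // Removed from overload resolution when T deduces to Widget&,
    // so it can no longer hide the copy constructor.
    template <typename T,
              typename = typename std::enable_if<
                  !std::is_same<T, Widget&>::value>::type>
    Widget(T&& value) { /* construct from some other type */ }
};

// Usage: with a non-const l-value, the copy constructor is now chosen.
// Widget a;
// Widget b(a);
```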
So in that case it doesn't actually collapse to just a reference? It didn't collapse to just a reference in that case? So you're saying no template here, no universal reference, just a copy constructor and a move constructor, right? No, I'm saying that the class is a template. Oh, that's fine. And the constructors, what do they take? Do they take widgets, or do they take one of the template arguments? typename T. typename T, so you still have a universal reference on the T. But just one constructor? Both, a copy and a move constructor. So they're not a copy and a move constructor if they don't take the exact type, right? If you have a constructor that takes, for example in this class, if I had a constructor that takes an int, you wouldn't say that's a copy constructor or a move constructor. It's just yet another constructor. So if this thing were a template of whatever, and you had a constructor taking whatever, and not widget of whatever, that's not a copy and a move constructor. It's just yet another constructor. Yeah. Hopefully that clears things up. Okay. So we do have a few minutes left, so I want to show you real quick just one more example of using sort of the same techniques for a slightly different scenario: compile-time computation. And first, I just want to make sure we're all familiar with constexpr. So basically, in the good old days, to perform compile-time computation you'd have to do stuff like that, with recursive templates. That's just a classical example; I'm sure you've all seen that thing. So for example, here I'm calculating factorial by having a class template specialization for size_t, where the base case is defined as one, and then all the other cases are recursively invoking that class template until it's specialized down to zero. So first of all, we can replace a lot of these recursive templates with constexpr, which was introduced in C++11. When applied to a function, constexpr means that in some cases the value that this function returns can be obtained at compile time. So you don't have to actually invoke it at runtime; you can just run through that function at compile time and get the output. So here's factorial as just a constexpr function. And obviously, the advantage here is that even though the thing is calculated at compile time, you still specify it exactly the same way as any other function that would be calculated at runtime. Furthermore, you could use that factorial function with values that are known at compile time, and then the result is known at compile time, or with values that would only be known at runtime, and then the result would only be known at runtime, which is nice. In C++14, a lot of the restrictions on constexpr were lifted. So for example, in constexpr functions you can now use loops, and you can declare local variables, which you couldn't in C++11, where the whole thing had to be a single return expression. So that opens the door for a bunch of additional things that you can express as compile-time computation. And you can even have types which are essentially constexpr now. So here's, for example, a complex type; tuple has a similar situation. A complex type where the constructor can be constexpr, and it has a real and a set-real function which are constexpr, and a conjugate function that can be constexpr. So basically, we can now express computations on complex numbers, and they will be performed at compile time, which is rather cool.
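For comparison, here is a sketch of factorial written both ways, the pre-C++11 recursive class template and the C++11 constexpr function that replaces it:

```cpp
#include <cstddef>

// The old way: a recursive class template with a specialized base case.
template <std::size_t N>
struct Factorial { static const std::size_t value = N * Factorial<N - 1>::value; };
template <>
struct Factorial<0> { static const std::size_t value = 1; };

// The C++11 way: an ordinary-looking function, usable at compile time or runtime.
constexpr std::size_t factorial(std::size_t n) {
    return n == 0 ? 1 : n * factorial(n - 1);
}

static_assert(Factorial<5>::value == 120, "computed at compile time");
static_assert(factorial(5) == 120, "computed at compile time");
```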
I think what's even more cool is that const and constexpr are two totally distinct concepts, right? So for example, here's a function which is constexpr but not const. It's actually mutating an object; that mutation just happens at compile time. So that's what we're saying by using constexpr here: this function is mutating a value, but it's happening at compile time. Great. So that's the very, very quick overview of constexpr. Some simple uses are rather nice. So for example, you can replace the horrible array-size macro, which takes a statically defined array and returns the number of elements in that array. You can replace that with a constexpr function template, where the template arguments are the type and the number of elements in the array, and we just return the number of elements. And that array size can be used in compile-time situations. So for example, you could declare an array whose size is the array size of some other array, which is a rather common use of array size. So that's a compile-time-known value. Another example is the power function, which can be calculated recursively. So two to the power of something comes up a lot. So these are simple uses, and there's a bunch of them in the library and in code you'd write yourself. But here's a slightly less obvious use, where constexpr can actually replace a bunch of the class templates which you would have used to define meta functions before. So our task here, in this particular case, is to build a compile-time facility, a compile-time function, basically a meta function, which takes a certain type trait, a Boolean property of types, such as is_integral or is_floating_point or is_nothrow_move_constructible, and an open collection of types, just any types we want: T1, T2, T3, an arbitrary number of types. And we want this thing here, and_f, to evaluate whether this Boolean meta function is true for all my types, T1, T2, T3, and return true or false, which would be known at compile time. So this is a classic scenario, which you would solve using class template specialization before constexpr was available. You'd have to build a couple of class templates recursively; they would have to derive from or extend each other. Here's the constexpr solution, which for me, anyway, is in some cases clearer. So here's the base case for and_f, for this little function. The base case doesn't take any types; it only takes the operator, the Boolean function, such as is_integral. And it returns true, because if you apply that function to no types, then we could say it is true for all the (zero) types we have. And then the interesting case, the recursive case, is over here, where we have that operation, the Boolean property of types. We have the first type, and we have all the rest of the types, and the rest can also be empty, so we could be down to just one thing. And we just perform a recursive invocation, right? So we ask: is op of T true, and is the recursive invocation for all the rest of the types true as well? And that's what we return. And the whole thing, of course, is constexpr, so it's all evaluated at compile time. It's not like you're actually going to have any recursive invocations at runtime; the whole thing just folds down to true or false at compile time.
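One way to express that and_f combinator, passing the trait as a template template parameter; the exact signature on the slide isn't visible in the transcript, so this formulation is an assumption:

```cpp
#include <type_traits>

// Base case: a property trivially holds for an empty list of types.
template <template <typename> class Op>
constexpr bool and_f() { return true; }

// Recursive case: Op holds for the first type and for all the rest.
template <template <typename> class Op, typename T, typename... Rest>
constexpr bool and_f() { return Op<T>::value && and_f<Op, Rest...>(); }

static_assert(and_f<std::is_integral, int, long, char>(), "all integral");
static_assert(!and_f<std::is_integral, int, float>(), "float is not integral");
```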
And for me, in some cases, this constexpr function style is more readable and more understandable than class templates which would have to interact with each other through specializations, especially if the thing you're working with is values and not types. So in this case, I'm working with values. I have true and false, which I have to combine in some way, and not types, which would be harder to combine using just functions. So my final example from the constexpr area is one I came across about a year ago, maybe a little more, which is parsing strings at compile time. So here's a little class called cstr, for compile-time string, which has a constructor from an array of characters with a known size. So it can basically be initialized from literal strings known at compile time. That's an array of characters, and the size is also known. What this constructor does is just initialize members: the pointer to the beginning of the string, and the length of the string. And now I can have, in that class, multiple constexpr operations which interact with these fields. So for example, I could have a length function, which would return the length. I can have an isEmpty function, which returns whether the length is zero. I could also have a function which looks for a certain character in the string and returns the first occurrence. And that function would still be constexpr, because the whole thing is performed at compile time. And one example of using it is for printf-like situations where I want to test that my format string is valid. So just think about it: if the format string is a compile-time literal string, which it is in a lot of cases, you can validate it at compile time, not waiting until runtime to find out whether the format string is valid. So in this very simple example, I have a static assertion which makes sure that the count of percent signs in my format string is the same as the number of arguments passed to the printf function. So that's obviously a simplification, because in printf format strings you can have a double percent sign, and that's not a placeholder. But the general idea still holds. We can test at compile time whether a certain string literal has a certain number of percent signs, and reject that format string at compile time if it doesn't. So for example, if I invoke my printf function with two arguments but only one percent sign, the compiler could complain, and that static assertion would fire and tell me the number of arguments doesn't match the format string. So obviously this is just a prototype, right? But it can be used in a lot of situations where I have to validate strings at compile time. For example, imagine a regular expression constructor that tests that the regular expression is well formed, a bunch of things that are quite easy to express once we have constexpr, because you can essentially reuse the same code, the same algorithm, for testing things at compile time and at runtime. So that's the wrap-up of my examples. Right, so we have about three minutes left, so if you have any questions, there will be a little time, I hope. I hope you've seen some examples of using template metaprogramming in a variety of tasks in both libraries and applications. So admittedly, a lot of these examples are from the library world, implementing STL algorithms, optimizing them and so on. But hopefully application-level examples can also be extracted from these samples.
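A sketch of such a compile-time string class. The talk's printf wrapper performs the equivalent count-versus-argument-count check; note that in practice the check has to happen in a context where the string is a constant expression, for instance at the call site as shown below, since a plain function parameter isn't usable in a static_assert:

```cpp
#include <cstddef>

class cstr {
    const char* p_;
    std::size_t len_;
public:
    // Constructible from a string literal: the array size N is known statically.
    template <std::size_t N>
    constexpr cstr(const char (&s)[N]) : p_(s), len_(N - 1) {}

    constexpr std::size_t length() const { return len_; }
    constexpr bool isEmpty() const { return len_ == 0; }

    // C++11-style recursive count of a character in the string.
    constexpr std::size_t count(char c, std::size_t i = 0) const {
        return i == len_ ? 0 : ((p_[i] == c ? 1 : 0) + count(c, i + 1));
    }
};

// Usage: validate a format string entirely at compile time.
constexpr cstr fmt("x = %d, y = %d\n");
static_assert(fmt.count('%') == 2, "placeholder count doesn't match arguments");
```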
And with the tools we have in C++ today, such as decltype, such as constexpr, it's a lot easier to build these kinds of solutions than it was in C++98. Now, if you're seriously considering investing in TMP, instead of building these basic tools yourself there's a bunch of libraries to choose from, right? So you don't have to build the basic combinators, for example, for testing whether a certain property holds for multiple types, that kind of thing. You don't have to build them yourself unless you just enjoy the process. I enjoy the process, but still, you don't have to build these things yourself. There's a bunch of libraries, including in Boost, some modern, some older, which provide solutions to common tasks that you have to perform at compile time. So this is all quite fun, and hopefully it will be applicable to at least some situations you encounter. And if you have any questions, we have about three minutes. Yeah. Yes. Why would you use specialization for the helpers? Instead of? Instead of the trailing type. Instead of? The extra parameter. Okay, so just to repeat the question: why did I show you an example of using class template specialization and tag dispatch instead of just using one of them? So first, it's nice to see multiple options. And second, tag dispatch is usually used when you're trying to invoke a function. If you have a whole class template that you want to pick, and that class template has members and behavior, so state, and multiple functions to invoke, I think it would be more convenient to use specialization. I think it's also a matter of taste to some extent. All right. Well, anyway, thank you very, very much for coming, and I'll have some samples for you on Twitter later if you'd like to explore some labs yourself and the examples I've shown you here today. I hope you enjoy the rest of NDC. Thank you very, very much for coming. Thank you. Thank you.
Template metaprogramming (TMP) is an extremely important technique in modern C++. First, TMP can be used as a precursor to C++17 Concepts, in order to check constraints and produce clear error messages when a template parameter doesn't adhere to its specified constraints. Second, TMP can be used to pick an algorithm implementation based on the template type provided -- thus enabling optimizations for specific types. Finally, TMP can be used to introspect C++ types at compile-time and generate compile-time constructs that save time or enable fully-compile-time computation. In this talk, we will review a collection of techniques and examples for using TMP in your libraries and application code. You are expected to understand basic template syntax and template specialisation, and we will build the rest as we go along.
10.5446/51694 (DOI)
Okay, so thank you very much for coming along this morning. My name's Gary Park. My contact details are up there; if you have any questions, feel free to reach out. I'm here to talk to you about a build orchestration tool called Cake. My aim today is to help answer some of the questions surrounding what Cake is and why you would use it, and also to provide some demos at the end as to how you can actually use Cake to achieve that cross-platform build automation that a lot of us now want, with the advent of things like .NET Core, and platforms and environments such as Mac and OS X that we want to target. So the first question that we really want to answer is: what is Cake? So first up, for those of you who have come here expecting actual cake, I apologize, there is no cake. Although let's just say you wouldn't want any cake that was baked by me. What we are here to talk about instead is this Cake. So first and foremost, Cake is an open source project. It's been around since 2014, and it's hosted on GitHub. It was started by a Swedish colleague of mine called Patrik Svensson, who was later joined by another Swedish colleague called Mattias Karlsson. I then joined the project. So, full disclosure, I am a member of the Cake contribution team. If there is anything wrong with Cake, it's probably my fault now, so feel free to pick me up on that. I joined at the tail end of 2015. It's a reasonably small project, but it is growing in popularity. We have almost 400 pull requests into the project, with 60 different contributors, as well as over 40 third-party add-ins for the Cake ecosystem now. I'll come to what that actually means in a bit. We're happy to say that as of today, we've got about 55,000 downloads on NuGet. So again, we're happy that it's an evolving project and people are taking an interest in it. So, to try to answer the question about what Cake is: this is the definition from the cakebuild.net website, which I encourage you to go and have a look at. What we're trying to say is that it's a cross-platform build automation system with a C# DSL, to do things like the compiling of code, the copying of folders, and so on. But at the end of the day, what does that actually mean? What we mean by that is that Cake is an EXE. It's just an executable that you run as part of your build process, and it takes as an input a build.cake file, and that build.cake file has the definition of what your build is. It's a script processing engine that uses Roslyn and/or the Mono compiler under the hood to take that script, compile it down into an executable piece of code, and then run that against the platform that you're running on. The reason that we have both Roslyn and the Mono compiler just now is simply to allow that cross-platform utilization. At some point, once the underlying Roslyn scripting engine becomes fully cross-platform, we'll switch to just Roslyn instead of using Mono, but that's a step down the line in terms of the roadmap. Before I go too much further into what that actually means and what Cake actually is, I want to make sure that we're all talking about the same thing in terms of what a build is. For me, a typical build workflow starts like this. We have a build. We want to compile something, whether that's out of Visual Studio, whether that's out of MonoDevelop, whether that's out of some other IDE. We have something that we want to build.
In order to do the build, you might have something like a package restore step. If you have NuGet packages within your solution, you have to restore them first as part of the build in order for the build to succeed, because you've taken those dependencies. On top of that, you might want to run unit tests, because, as everyone knows, there should be unit tests in a project. Not everyone has them, but the ideal is that you have some unit tests to go with your project. In order to complete the build, you might have something like a clean step. A clean step basically just tidies up the underlying folder structure to remove any artifacts from the build process, because the last thing you want is a build succeeding only because the last one succeeded. So you have a clean step. On top of that, you might have some test coverage, because what's the point in running tests if you're not actually looking at the results of those tests and making sure that you're either improving or catching where things are getting worse? Then, depending on what kind of developer you are, if you're like me, you want to run some sort of static analysis on your code to make sure that everything is meeting the standards of your project team. So you might use tools like StyleCop or dupFinder, FxCop, or InspectCode. Those are all utilities that will look at your code base and make sure that all the line endings are the same, that you're not doing anything that you shouldn't be, and so on. You might have a package step. What's the point in building something if you're not actually going to make it available for consumption elsewhere? And then there might be a final step, which is the publish step. That's to take the output of the build, whether it's an MSI, whether it's a NuGet package, whatever your output is, and push it somewhere so that other people can consume it. Especially if it's a paid-for product, you want to make sure that it gets there. So what I'm hopefully describing there is something that is common to everyone. It's a fairly typical workflow. Obviously, your steps might differ, but what I'm going to show here is, hopefully, that Cake can help you to fulfill that workflow in a common way. Because once we get into the innards of Cake, what Cake is going to do is create what's known as a directed acyclic graph of all of those tasks, and it will make sure that each of those tasks in the dependency chain is run, and run only once, as part of the build. So with that in mind, just a quick, brief overview of what Cake actually is and what it does. Cake, as I said, is an EXE. That EXE takes as its input this build.cake file. I'll show you some examples of what the build.cake file is and what it means as we go through this. In addition to the build.cake file, Cake accepts a number of arguments into the raw cake.exe, as well as environment variables and/or a configuration file that you can specify within your repository as well. What that allows you to do is fully customize how Cake is going to work. Once you've got both the cake file and the arguments being passed in, as part of your build script you have the option of using what we call preprocessor directives. What they allow you to do is add any tools or add-ins into your build workflow. Because, as I mentioned before, if you're using something like InspectCode, you need to get that tool from somewhere.
We don't recommend that you put those tools directly into your source control repository because, one, it makes your source control repository big, and you don't need to; you can get them from elsewhere. You can use the preprocessor directives to pull those from some source on the Internet and make them part of your build. In addition, because it's just C#, if you wanted to reference just a DLL or a NuGet package to consume its functionality, for instance, let's say you wanted to use something like Json.NET to parse JSON as part of your build process, you can add a direct reference to the Newtonsoft DLL, and then within your cake file you can just write the normal C# that you would for parsing that JSON and using it as part of the build process. Once Cake's got all of those things, it's going to hand them off to either Roslyn or the Mono compiler; it's going to take that script and make an executable piece of code out of it. So basically, at that point, if there are any syntax errors or any problems with the script, it's going to fail. It's a compilation step, so it will fail as part of the build, but you'll get notified of that in the standard way, with a stack trace and some pointers as to how to correct it. Once all that's done, it then actually executes the build. The output of that build is literally anything you can think of: whether it's an EXE, whether it's an MSI, whether it's a NuGet package, whether it's a deployment to Azure or to Amazon, literally anything can be done as part of this build process. So the world is your oyster. So I can't stress enough that the familiarity that you might have with C# extends directly into the build.cake file. Whatever you can do in C#, you can do within the build script as well. So the next question that you might be asking is: well, you've spoken about all these tools and what you can do within the build process, so what tools can actually be used with Cake? And I like to show this in this form. The items that are appearing here in black text are all of the tools that are available within core Cake. As soon as you have a reference to Cake, you have the ability to use all of those tools. The ones that are showing up in blue are ones that have been submitted by our community members; they've come as direct add-ins to the project. That's actually where I got started in the project. I'm also a contributor to an open-source organization called GitTools, which has tools like GitVersion and GitReleaseManager. I saw that Cake as a product didn't have the ability to run those tools, so initially I used the extension points within Cake to provide those as third-party add-ins to the product, but they've subsequently been merged into the core product as well. So what we've got in here is the ability to do things like: we basically cover all the unit testing frameworks, and the ability to run test coverage. We also have, if you notice in there, things like Slack and Gitter. Because as part of the build process, you need to know when something's gone wrong, and with the explosion in the use of Slack and things like Gitter as communication channels, you can post a message directly into a Slack channel or a Gitter room to notify people that something's gone wrong with the build.
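As a sketch of what those preprocessor directives look like at the top of a build.cake file (the package names and the DLL path here are illustrative, not from the talk):

```csharp
// Pull a tool (a console executable used during the build) from NuGet:
#tool "nuget:?package=GitVersion.CommandLine"

// Pull a Cake add-in (extra aliases made available to the script) from NuGet:
#addin "nuget:?package=Cake.Slack"

// Reference an assembly directly, e.g. to parse JSON with Json.NET:
#r "tools/Newtonsoft.Json.dll"
```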
So although there's a lot up there, hopefully you'll see that there are some tools that you're already using today which you'll immediately be able to start using within Cake. And on top of that, as I said, if there's something that's not in there that you actually require, the extension points within Cake make it really simple to add it in. So the next question that you might be asking yourselves is: okay, but why do I need it? What I want to emphasize here is that, for me personally, Cake is all about the familiarity of C#, but on top of that, it's the ability to have a build system that is both maintainable and consistent across different environments. What I mean by that is: if you're just in Visual Studio and you're doing Ctrl+Shift+B to build the solution and that's your workflow, then that's great. The problem stems from the fact that if you go on holiday and someone else has to take over the project, and it doesn't build on their machine because they don't have an SDK installed or they don't have a tool installed, then the build fails, and that's where the communication channels break down. So having something like Cake, which allows the build to be orchestrated in such a way that it can run on any machine, takes that concern or that problem away by forcing you down the path of making it work on every machine. But again, it's just C#, and that's really the big draw of it. The other part of the process is, as we've mentioned, that within Visual Studio you might be using a test runner like ReSharper or NCrunch to run the tests, but again, you want to take that out of Visual Studio. You want that to be common for everyone, to the point that you could actually run the build on a machine that doesn't have Visual Studio installed. You want to be able to run the build with just the basic SDKs and tools installed, rather than having that dependency on all of that proprietary tooling. And probably the final, most important part is that we want to eliminate the problems of human error. We all make mistakes, but by putting it into a scripted process, you're allowing the computer to take over at that point. You're not worried about whether Bob remembered to do this step when he did the build, or whether Bill remembered to do that step. It all becomes a documented part of your process, because it's part of your source control repository. So hopefully that answers the question of why we need it. The other kind of question that I want to answer today is: what are the core philosophies of Cake? Because Cake as a product has a number of ideas about what the build process should look like, but we also don't want to force those upon you. We can make suggestions, but Cake is flexible enough to allow you to run the build however you want. Some of the core philosophies of Cake: it should be non-intrusive. Some of the other build systems or build pipelines that you might have used require modifications to things like your .csproj files in order to work. Cake is fully non-intrusive. What I mean by that is that it's a standalone build.cake file that resides beside your code, and that's it. It doesn't require any other tie-ins to the system.
So what that means is, or what we like to say is, you can start using Cake without telling your project manager about it, and once he's seen the power that Cake can give you, you can just slot it in, because you haven't actually affected any of the rest of your process. The other thing we like to say is: it should just work. We have tie-ins and aliases for all of the most common tools, and they're being run on a number of different build systems now. For instance, we use Cake to build Cake. We like to do that because we can then obviously see any problems that are creeping through. So if things aren't working, there's something wrong with Cake, obviously, but we test it to an extent every time we do a release, so we hope it just works. We also want Cake to be highly configurable. What we mean by that is: when you need to change something as part of the build process, you can do that. It's highly configurable, and with the most recent release of Cake, it's actually completely modularizable as well, so that if there's something within Cake that you don't like, you can rip that out and replace it with your own specific implementation of that particular part of Cake. The other core tenet that we like to suggest to people is that there should be no binaries checked into source control. Simply put, they're not required. You want them coming from somewhere else, and that goes for Cake as well. And what we'll come on to as part of the demonstration is how we, as a product, suggest that you bring actual Cake, the cake.exe, into your pipeline. As I said before, we want it to be easy to implement your own tools, and I can vouch for that, because that's where I started. And the final tenet is that we want the build to look and behave the same regardless of what operating system you're working on or what environment you're running on. And the easiest way to show that is by showing you this. That is on our GitHub repo, within the readme. We build Cake with Cake on eight different CI servers, and we run it across three different operating systems. You might think that that's a little bit excessive, but there are certain parts of the tooling that behave differently depending on what continuous integration server they're running on. For instance, the unit testing framework NUnit knows when it might be running on something like TeamCity, so it automatically pushes test results into the TeamCity system via what they call service messages. So we like to make sure that Cake, when it's utilizing these tools, continues to function as we want it to. The eagle-eyed amongst you might notice that the Travis build was failing when I took that screenshot. That was to do with NuGet. Travis and NuGet aren't getting on very well just now, but I wanted to leave it up there just to show that we're not infallible with the build process; we're still subject to external failures as well. But we have actually put some additional retry logic into our own build now, so that if the Travis build fails, we'll retry as part of the build process, because typically it's the NuGet package restore that's failing. We've just added in the Polly library, which will automatically retry that NuGet package restore as part of our build, and nine times out of ten it continues to work as expected.
The next question that I'd like to address is: can I just use FAKE or Make or CMake or MSBuild or NAnt or Psake or Bau or any one of these other existing build tools? The simple answer is yes. I'm not going to stand up here and tell you that you have to use Cake. If you're already using those tools, then continue to use them. I mean, I can't force you to use Cake. I would like you to use Cake, but I can't force you to. But what I'd like to say here is that we've heard, through our community members and various conversations that we've had, this idea that if you're doing a C# development project, then using the build system to learn another language is a good way to do it. So if you wanted to pick up PowerShell, for instance, you might want to use Psake, or if you wanted to pick up F#, then you might want to use FAKE as the build system, because then you're kind of being forced into that different language, and it's a good way of learning. And I fully agree with that, to an extent. The extent is: when that build fails, you then have to do a mental switch into that other language to fix it. And the problem that I've seen, even within my own company, is that you become the one person who knows how to make the build work, because you're the one who put it in place. Using something like Cake makes sense if you're doing a C# project, because you don't then have that mental switch between the two languages. You simply have a direct port from one language to the other, and you're fixing the same problems. So what I would say is: if you're doing a PowerShell project, use Psake. If you're doing an F# project, then use FAKE. But if you're doing a C# project, then use something like Cake or Bau or one of the other C#-based systems that are out there. What I try to avoid is that mental switch that has to happen as part of the build process. So without further ado, I'm going to switch to the demos, because that was just a very brief introduction. I think the heart of this talk is really about the demos and what you can actually achieve. What we're going to try to do as part of the demonstrations is take that typical build workflow that I spoke about earlier, and the steps we're going to try to achieve are the ones in orange here. We haven't really got time within the time allotted to do everything, but hopefully, if we can get those ones done, you'll have a clear indication of how you can start building with Cake and then start plugging in the different parts of the build process that you want. If you're interested in the demos that I'm about to show you, you can get them from there. It's a very simple GitHub repo, but it's got all the steps that I'm about to work through, so you can take them away and work through them if you need to. So with that in mind, let's jump to the demos. So just a couple of quick notes about the demo that I'm about to show you. I didn't want to rely on the internet connection at NDC. Although I was told it was really good, I didn't want to rely on it. So what I've done is use some of the things I spoke about in terms of configuration, and I'm running Cake in essentially an offline mode. It doesn't require an internet connection in order to run; I've used some configuration options to make it run completely locally.
For instance, I'm using a Nexus repository, so I've got a local NuGet repository for all the packages that I'm about to pull in. I don't need to go out to nuget.org. But what I will say is that all of the commands I'm about to run and all of the scripts I'm about to show you will work in the online environment as well; I'm just running them locally so that I didn't have to worry about the demo gods. So where we're going to start, if we go back to our demonstration here, is the package restore step. Why we need to do that: if we jump to Visual Studio here, what I've got is quite simply the most amazing project you're ever going to see. It's not, really. It's literally the standard ASP.NET web application template that comes with Visual Studio out of the box. All I've done is slightly customize it so that I've got a test project running a few tests against the home controller. And I've also made it so that there's a third-party library in there, this Cake.Sample.Common — or rather a common library that's shared with the web application project — simply so that I can show how we're going to do some NuGet packaging. I'm going to package up that Cake.Sample.Common into a NuGet package as part of the build process. The reason that ReSharper and Visual Studio are literally screaming at me right now is because I've made it so that none of those NuGet packages have been restored. So it's running essentially with all the references not there, and both Visual Studio and ReSharper are telling me: you need to fix this. But again, we're not in Visual Studio at this point. We're going to do it all from the command line using our Cake script, and we're going to make that build work effectively. I just wanted to show you what we're actually building. The tool that I use for my build script is VS Code. Cake has a VS Code extension, which means we've added some custom functionality to VS Code specifically for Cake. One part of that is these commands. So if I go up to here — hopefully you can all see that — if I type "cake" here, what we'll see is that there are two custom commands in there. One is to install what we refer to as the Cake bootstrapper, and the other is to install a configuration file. The configuration file is what I spoke about before. You'll see that I've actually got one in my project here already. That's simply controlling where Cake gets some of its core components from. So again, I've made that local, so it runs against my localhost, but that typically wouldn't be required unless you wanted to change that functionality. So what I'm going to do here is type "cake" again, and I'm going to install the bootstrapper. It's asking me whether I want a bootstrapper for Windows or for Linux and OS X. In my case, I'm on Windows, so I'm going to go ahead and select that option. We get a notification to say that a bootstrapper file has been downloaded, and as you'll see over here, we've now got a build.ps1 file. This build.ps1 file is the one that we give you to get started with. It's fully expected that this bootstrapper file will need to be customized by yourselves, because you'll want to do things as part of your project setup that we don't. But this is our best suggestion as to how to get started with it.
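As an aside on that configuration file: a rough sketch of what a cake.config redirecting those sources can look like. The section and key names here are from memory of Cake's documentation, and the localhost feed URL is obviously just a placeholder for a local repository, so treat this as an illustration rather than the exact file from the demo:

```ini
; cake.config - hypothetical example of running Cake against a local feed
[Nuget]
; resolve packages (including cake.exe itself) from a local repository
Source=http://localhost:8081/repository/nuget/

[Paths]
; where resolved tools such as cake.exe end up
Tools=./tools
```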
Excuse me. I'm not going to go through this in too much detail, but just so you understand what's going on here: this bootstrapper is going to ensure that we've got the dependencies in order for Cake to function. We need to download the cake.exe — you'll see in here where we download Cake from. Once we've got the cake.exe installed, which we do as part of the tool resolution, we'll have that cake.exe ready to be executed. So this is really just a setup stage. Once you've got this configured, you'll likely never need to touch it again. And it's the exact same thing for the bash version — all we're giving you there is the bash, or Linux and OS X, equivalent of doing that exact same thing. The reason we have two is that if we're running the same build.cake file on all of these different environments, we need some way of kicking that off. So if we're running our build on AppVeyor, which is a Windows CI system, we'll target the build.ps1 file, and if we're running on something like Travis, we'll target the build.sh file so that it does the bootstrapping for us. Once we've got that bootstrapper in place, we'll just close that — no, we don't want to save it — and we'll add in a straight-up build.cake file. This is obviously going to be empty when you first start. What you want to do is start populating it with the tasks — that directed acyclic graph that I spoke about — you want to start stubbing out those tasks as part of the build process. As you saw there, my typing isn't very good when I'm standing up here, so I've got a number of code snippets that will help me enter that. What you've got here is about the most basic build.cake file that you can start with. At the very top, in line one, I'm accepting arguments from the command line. This is Cake's way of saying: find an argument called target on the command line, and if there isn't one, default to the Default target. So if you wanted to call Cake with a target of, say, the NuGet package restore, then that would be the task it targets as the entry point of the build. At the bottom there, between lines nine and ten, we've got the definition of that Default task, and all we've got there is an IsDependentOn to say what should run when that Default task is started. So in this case, what's going to happen? If we refer back to here: as part of the default build, we want that package restore to run. So that's what's happening here — we're not going to pass in a target, it's going to run Default, and then it's going to run the NuGet package restore step. Now let's just go ahead and save that. Now what I could do — and this is another reason to start using VS Code — is that with the latest release of VS Code, there is now an interactive terminal within here. So I could just run that build.ps1 file and you'd see the build executing there. I'm not going to use that, because it's difficult to show you that and also edit the code at the same time. So I'm going to use a straight-up PowerShell window here and run that same build. What you'll see is that it's gone through the build process and output that it was successful — green, obviously, for successful.
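If you can't quite read that on screen, the shape of that most basic build.cake is roughly this — a sketch where the task names are my own illustration of what's being described:

```csharp
// Line one: accept a "target" argument from the command line,
// defaulting to the Default task if none is given.
var target = Argument("target", "Default");

Task("NuGet-Package-Restore")
    .Does(() =>
{
    // Stubbed out for now - the actual restore call gets added next.
});

// The Default task only declares what it depends on.
Task("Default")
    .IsDependentOn("NuGet-Package-Restore");

// Entry point: run whichever task was requested.
RunTarget(target);
```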
But we haven't actually done anything yet. All that's happened is the bootstrapping. That bootstrapper has ensured — if we jump to this folder over here — within this tools folder, which is a convention that we start with (but again, that's configurable), that the cake.exe has been downloaded. The bootstrapper went off to NuGet, downloaded the cake.exe and made it available in this tools folder. Then it executed the cake.exe, passing in the build.cake — again, a convention — and the build ran. So we haven't actually done anything yet, but we have started the process of making that all work. If we run that again, it obviously doesn't need to do some of those bootstrapping steps anymore, so the build is going to be that bit quicker this time. Just understand that that's already happened for us. What we want to do now is actually do something within this NuGet package restore step. We want to make a call out to nuget.exe, pass in the location of the solution file — in this case the one that I've got up in Visual Studio — and do a NuGet package restore. For those of you who have done that at the command line, we all know it's just "nuget restore" with the solution passed in. But what we're trying to do with Cake is get to a point where we're not worrying about the actual command line arguments. We want a type-safe way to make that happen, in a way where we don't have to remember all of the specific command line arguments. We want it to, again, just work. So I'm going to go to my second demonstration here. This is what it looks like: we've got a method alias for the NuGet restore, and all it's doing is accepting an input parameter, which is the location of that solution file. And that's literally it — that's all we need to do to kick that part of the process off. To show that there's no smoke and mirrors going on, I'm going to go to the source folder. Notice that there's no packages folder at this point. But if I go over here and run that build, you're going to see this wall of text, and that wall of text is the 47 packages that the default ASP.NET template needs in order to build. That's now taken all of those NuGet packages — in my case, from my local NuGet feed — and put them into the system, so they're now available to us. Okay? If I run the build again, we're not going to see that big wall of text anymore, because those packages have already been restored. So again, incrementally, the build gets quicker. But if I were to remove that packages folder and make it a little bit harder for the build system, then run the build again, then we are going to see that big wall of text, because we need to restore those packages. So again, we're scripting it out. We're making it so that one person doesn't have to remember to do all these steps; we do that as part of our build process. Now, don't get me wrong, this happens for you automatically within Visual Studio when you do that Ctrl+Shift+B. But we're stepping away from that environment, and we need to take ownership of those particular tasks. Okay? So if we jump back to here, what we want to do now is actually run a build.
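Before moving on, that restore task in the script looks roughly like this — a sketch where the solution path is my guess at the demo's layout:

```csharp
Task("NuGet-Package-Restore")
    .Does(() =>
{
    // Type-safe wrapper around `nuget.exe restore <solution>` - no need
    // to remember the raw command line arguments.
    NuGetRestore("./src/Cake.Sample.sln");
});
```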
We want to build that solution and look at the output. So again, if we go back over here and have a quick look, there's no bin folder — this project hasn't been built yet. We want to build it as part of our script. So we're going to go back to here, at the top, and start demo two. What I'm going to do is add an additional argument up here, because as you know, within Visual Studio you can specify whether you're doing a debug build or a release build. The same options need to come through Cake: we need to specify whether we're doing a debug build or a release build, and we can take that as an input argument. So again, here I'm going to look for an input argument called configuration, and if there isn't one, I'm going to default to a release build. Okay. With that in place, I should be able to come down here and do demo two, and I'm going to do the exact same thing as before. That's just a method alias again, for MSBuild, and all I'm doing is passing in the location of the solution file. Now some of you might be thinking: that's all well and good, but I need to be able to treat warnings as errors as part of my build, or I need to specify the specific MSBuild version that I want to use. That's where the configuration options within each method alias come in. So if I rewrite it as this, what we're seeing here, using this MSBuildSettings class, is that I'm just newing up a new instance of that settings class and setting multiple properties, some with extension methods. I'm basically saying: I'm on Windows, I want to treat warnings as errors, I'm using that specific version of the MSBuild tool, I want to set verbosity to minimal, and so on. All of these things are available to be set, and for more information I'd direct you to the help documentation on cakebuild.net. What you're seeing here within the MSBuild alias is an example of what you can do, and then if we jump over to the MSBuildSettings page, we'll see all of these properties that we can set on MSBuild. All of those translate directly to command line arguments that are passed to the underlying msbuild.exe as part of the build. With that in place, what I need to do before I forget is change my Default task to now take a dependency on the Build task. Because as part of that graph we're building up, we now want to run the build, and the build takes a dependency on the NuGet package restore — that's what this line is doing. Now if we jump back over here, with that folder open as an example, we're going to run the build. It's not going to do the NuGet package restore now, because it doesn't need to, but it is going to do the actual build step. So you see here it's doing the build, and then we'll see we've now got a bin folder. If we let that finish out, the artifacts of that build are now within here — it's compiled all the exes and the DLLs that go with that project. Now the eagle-eyed amongst you, or the concerned amongst you, might be thinking: in this bold new world of .NET Core and ASP.NET Core, we haven't quite yet got an MSBuild for .NET Core. There is one, but it's not quite there yet in terms of what we need.
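Pulled together, the build step being described looks roughly like this. The tool version and property values are illustrative of the settings class and its extension methods, not necessarily exactly what was on screen:

```csharp
// Debug or Release, taken from the command line, defaulting to Release.
var configuration = Argument("configuration", "Release");

Task("Build")
    .IsDependentOn("NuGet-Package-Restore")
    .Does(() =>
{
    // MSBuildSettings maps straight onto msbuild.exe's command line arguments.
    MSBuild("./src/Cake.Sample.sln", new MSBuildSettings()
        .SetConfiguration(configuration)
        .SetVerbosity(Verbosity.Minimal)
        .UseToolVersion(MSBuildToolVersion.VS2015)
        .WithProperty("TreatWarningsAsErrors", "true"));
});

// Default now depends on Build, which depends on the restore.
Task("Default")
    .IsDependentOn("Build");
```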
So the immediate question is: how do I run this on Travis? Because you're calling MSBuild, and MSBuild doesn't work on Travis. That's where the flexibility of Cake starts to come in again. What we're showing in this example is that within this Build step, I might want a simple if statement. That if statement says: if I'm running on Unix — so either Linux or OS X — I want to use the XBuild tool rather than MSBuild, because XBuild will work on those platforms. Obviously Cake can't do everything for you; it's not going to know which tools run on which environment. So there is going to be some element within your build script of understanding the build pipeline, understanding which tools run on which environment, and taking the necessary steps to ensure they run in the correct instances. What I'm showing here is that with the IsRunningOnUnix() method, we can make an informed decision about which tools we want to run. And you can do the same in reverse — there's IsRunningOnWindows() — and there are other steps you can take to ensure that a particular build task runs on those platforms as and when required. So if we just leave that in there, save it, and run it again, the exact same thing should happen, because I'm still on Windows, so I'm still running that same MSBuild task, and it completes as required. Moving on, the next thing we want to do is run some unit tests. The unit test runner that I'm going to use is — I think it was xUnit, let me just check that. I've got some unit tests in my project, so I need an xUnit task. This is showing, again, the XUnit2 alias. It takes a number of arguments, and those of you who are familiar with xUnit will be familiar with these things. Here I'm just setting a couple of things, like the output directory, what kind of report I want generated, and where to find the test DLLs that I actually want to execute. What I've got here is: I need to say that I now want to run this one as part of the build, so I'm going to change that over there. As part of that graph, the xUnit task is going to run the build, and the build is going to run the NuGet package restore. So if I go ahead and run that, I'm going to get a failure. And I hope I'm going to get a failure. I should get a failure. Yes, I got a failure. Now, that's expected. Everything that I've shown you so far — the NuGet package restore and the MSBuild — the underlying tools for both of those aliases are already on my machine, because the bootstrapper downloaded nuget.exe for me, and I have Visual Studio installed, so I already have MSBuild. xUnit I don't have installed. So this is where the preprocessor directives come into play. I need my Cake build process to grab that tool and make it available for me as part of the build. So if I go over here, I'm going to use the tool resolution syntax. What that's saying, with a hashtag tool and then nuget, is that the tool comes from NuGet, and that's where Cake can find it. Now again, because I'm running locally, I want a slightly different syntax — I'm going to use this. This is just showing the flexibility of that URI syntax: I want that package to come from somewhere else. The default would be for it to come from nuget.org.
But I want it to come from my local repository, so I've changed that syntax up slightly. And if I wanted a specific version of that tool, or to use a pre-release tool, there are additional arguments that you can put onto the end of it to make that happen. With that in place, if I run the build again, this time it will download the xUnit tool, so the xUnit console runner is now available in my tools folder. Cake knows about it now, and it's attempted to run the unit tests. But again, it's failed. If you were to read the stack trace here and do some googling, as I needed to when I first tried this demo, you'll realize that as part of running that xUnit tool, we specified that the output directory should be a folder called .build/test-results. The xUnit runner actually assumes that that folder exists — it needs to exist on the file system in order for it to put the output file there. And what we'll see if we go up here is that we don't have a .build folder, and we don't have a .build/test-results folder. So it's therefore failed, and again, we need to take ownership of that as part of our process. So I go up here and add in a Clean task. All that's doing — again, using a method alias that we have available — is saying: for every folder that I put into this CleanDirectories method alias, make sure two things happen. One, that the folder exists, and two, that it's empty. Because as part of the build process, we want to make sure there are no artifacts left over. With that in place, my xUnit task is now dependent on two things, so I'm just going to say it is dependent on the Clean task as well. And if I go back here, what we should get this time is: it'll run through the build, it'll run xUnit, it'll create that .build folder as part of the build, and it'll put the test results output in there. So what we'll see in here is I've got three tests, and because I'm an amazing developer, all three tests passed, and I can go home knowing that my build works. So that's that part of the process. Back to here — the last step that we're going to cover is the NuGet packaging. As I discussed, in my Visual Studio project I've got this Cake.Sample.Common DLL that I want to wrap up as a NuGet package, because I want to share it within my project teams. So I want to do that using a NuGet package step. If I go back down here, I've got a Package step. And looking at what I've got here, you might immediately be thinking: Gary's gone nuts, what is all this stuff? Because this is meant to be really easy. What I'm trying to highlight is that, yes, it is really easy — there is a NuGetPack alias that will just take as input the nuspec file that you want to package. But what I'm showing here is that there's complete flexibility in how Cake operates: if you don't want to have that nuspec file, or you know that the nuspec file needs to be created each and every time you run the build process, you can have Cake specify all of those properties as part of the build, pass all of those NuGetPackSettings into that NuGetPack call, and just have it create the package.
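Before we get to the packaging side in detail, here is roughly what the pieces built up so far look like assembled in the script — the tool directive, the cross-platform Build task, the Clean task, and the test run. Paths, patterns and the XBuild fallback are my reconstruction of the demo's conventions:

```csharp
// Pull the xUnit console runner in as part of the build -
// no binaries checked into source control.
#tool "nuget:?package=xunit.runner.console"

Task("Clean")
    .Does(() =>
{
    // CleanDirectories makes sure each folder exists and is empty -
    // the xUnit runner won't create its output directory itself.
    CleanDirectories(new DirectoryPath[] { "./.build/test-results" });
});

Task("Build")
    .IsDependentOn("NuGet-Package-Restore")
    .Does(() =>
{
    if (IsRunningOnUnix())
    {
        // MSBuild isn't available on Linux/OS X, so use XBuild there.
        XBuild("./src/Cake.Sample.sln",
            new XBuildSettings().SetConfiguration(configuration));
    }
    else
    {
        MSBuild("./src/Cake.Sample.sln",
            new MSBuildSettings().SetConfiguration(configuration));
    }
});

Task("Run-Unit-Tests")
    .IsDependentOn("Clean")
    .IsDependentOn("Build")
    .Does(() =>
{
    XUnit2("./src/**/bin/" + configuration + "/*.Tests.dll",
        new XUnit2Settings
        {
            OutputDirectory = "./.build/test-results",
            XmlReport = true
        });
});
```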
So, on the packaging side, what you won't find within my demo project — if I go up to here and look at the source, at the Cake.Sample.Common project — is a nuspec file. There's no nuspec file in there. I'm fully driving the packaging of that NuGet package from the build process; it doesn't rely on a file in the underlying system. What I'm trying to show here is the flexibility and what you can get it to achieve. So what we'll need to do here — and I'm not going to make the same mistake I did before — is: I'm specifying that the output directory of that NuGet package should be a nuget folder within the build output folder. So I'm going to go up here and extend my Clean task — if I can type "demo" — to include another folder, the .build/nuget folder. And I'm going to go down to the bottom to specify that the Default task now takes a dependency on Package. So if I go over here and show the output folder, which is .build, then go back over here and run the build, it's going to do all the steps that we did before: it's still going to do the NuGet package restore if required, it's going to do the build, it's going to run the xUnit tests, and it's also going to package up into this nuget folder. I've now got a nupkg, as they're called. And if I open that bad boy up, what we're going to see in here is that it's got the version number that I specified, it's got the release notes that I specified, it's got the copyright that I specified. All of those things came from the build script. Okay. Now, the final demo that I wanted to show before we go back to the slides — sorry, question. What do you do with the build number in the output folders? If you kick off two builds at the same time, you'd be cleaning the same test results folder as well, wouldn't you? So the question is: why do I not put a version number into the build output so that it segregates in that way? You certainly can do that. Typically, when I'm doing a build, I use a tool called GitVersion, which asserts the semantic version based on the commit history in the repository, so I'm always asserting that. And Cake actually does that: if you look at the Cake build script, within our build output folder we do take a version number and append it to the build output folder. So you certainly can do that — again, it's flexible, you can do anything you want. If you're interested in that, take a look at the build.cake file on Cake itself and you'll see that in its artifacts folder we do include the version number, so we can segregate in that way. The question I would ask, though, is why you'd be running two builds on the same machine at the same time. That would be another question. So the question is: what happens if two builds are running at the same time? On any typical CI server, a build agent is only running one build from the queue at a time, so your continuous integration server would take care of that. And if you've got multiple agents, the files are all local to each agent, so the queuing system within the CI system takes care of that. We can talk some more afterwards if you've still got questions.
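To make that nuspec-less Package step and the versioning idea from the Q&A concrete, a rough sketch — the package metadata is illustrative, and GitVersion needs its own tool directive (the alias and property names here are from memory, so take them as assumptions):

```csharp
#tool "nuget:?package=GitVersion.CommandLine"

// Assert the semantic version from the repository's commit history; folding
// it into the artifact path segregates the output per build, as discussed.
var version = GitVersion();
var nugetDirectory = "./.build/v" + version.SemVer + "/nuget";

Task("Package")
    .IsDependentOn("Run-Unit-Tests")
    .Does(() =>
{
    // No nuspec file on disk - all of the metadata comes from the script.
    NuGetPack(new NuGetPackSettings
    {
        Id = "Cake.Sample.Common",
        Version = version.NuGetVersion,
        Authors = new[] { "NDC demo" },
        Description = "Shared sample library",
        ReleaseNotes = new[] { "Packaged straight from build.cake" },
        Copyright = "Copyright (c) 2016",
        Files = new[]
        {
            new NuSpecContent
            {
                Source = "Cake.Sample.Common.dll",
                Target = "lib/net45"
            }
        },
        BasePath = "./src/Cake.Sample.Common/bin/" + configuration,
        OutputDirectory = nugetDirectory
    });
});
```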
What I wanted to show as part of this demo is that as you're creating the build script, you might run into issues — issues with tools that are missing, issues with syntax, or issues where something just isn't working as you would expect. What I would say is that as you move these builds onto some sort of CI system, you will want some logging and diagnostics in place, so that when you're running on something like TeamCity or Jenkins or Travis, you can look at the build log and see what's going on. But what I'd like to show you is that we have recently added the ability to debug these Cake scripts. Because it's just a C# script, we can open it within Visual Studio and step through the build using Visual Studio. So when you're first creating the build, it becomes easier to figure out what's going on. What I'm going to do is go up here into my NuGet package restore step, and I'm just going to use another one of the preprocessor directives that we've included, which is the break. What that says is: when you hit this point, if there is a debugger attached, stop right there in the Cake build. So I'm going to jump to here, and I'm going to run Cake directly this time — I'm not going to use the bootstrapper. I'm going to pass in that build.cake file, and I'm going to pass in the debug flag. What that's going to do is immediately say: attach a debugger to the process it's just started. So it's instantiated Cake and created a process to go with that. If I jump over to Visual Studio and go Debug, Attach to Process — the process ID there was 4628, so it's that one — and attach to it, it's going to attach the debugger to that process, load up that build.cake file, and stop on that break preprocessor directive. What you're looking at is: the script still says break, but in the compiled version it's become the System.Diagnostics.Debugger equivalent — check whether a debugger is attached, or launch the debugger. At this point, I can just F10 through the build. If I go back to here, you're still seeing the output as it happens, but we're stepping through it within Visual Studio. If I step through this again and do the actual build — we've got that if statement there, so we know we're running on Windows, so we're going to run MSBuild — and jump back to the PowerShell output, we see the build happening, but you've got the option within Visual Studio to step into it. The prime example of that is if I go back down to this one, set a breakpoint here, and let it run until we get to the NuGet package step: I can set a quick watch on this, and in here I've got access to all of those properties that I've set. So again, when you're first creating the build script, or you're trying to debug something that's going wrong with it — it's just not quite working the way you want it to — you have the option of stepping into it with the debugger within Visual Studio. That's one of the benefits you get for free because it's C#. Okay, so I'm just going to jump back to my slides here. I should have Ctrl+F5 there. I'll do that again — I've lost that one. I'll go back to my demo slide here.
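For reference, a sketch of that debugging setup — the #break directive dropped into a task, plus the invocation. Depending on the Cake version, the flag is -debug or --debug, and the paths follow the demo's conventions:

```csharp
Task("NuGet-Package-Restore")
    .Does(() =>
{
    // With a debugger attached, execution stops right here; in the
    // compiled script this becomes a System.Diagnostics.Debugger check.
    #break

    NuGetRestore("./src/Cake.Sample.sln");
});

// Launched directly, not via the bootstrapper:
//   ./tools/Cake/Cake.exe build.cake -debug
// Cake then waits for you to attach Visual Studio to the reported process ID.
```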
Okay, so the last thing I wanted to talk about, before I take any questions if there are any, is some news associated with Cake. As of today, Cake is now a member of the .NET Foundation. For those of you who are not familiar with it, the .NET Foundation is an independent organization that's trying to foster open development in the .NET ecosystem. It already has projects in it such as IdentityServer, MVVM Light, .NET Core and ASP.NET Core. We're very happy to be joining the foundation, because we think it will bolster Cake's profile — people will hopefully become more familiar with what Cake is and what it does. The blog post that I've linked to there might not quite be up yet, because they're tying it in with this talk that's happening here, but it will be up at some point today. So we're very happy to say that we're now a member of the .NET Foundation. We've got about five, ten minutes left, so if there are any questions, I'll take them now. No questions? Yes? Are you saying that you could throw out your old build servers by just using this? Could I just throw out my old build server or build system? From a purely personal standpoint, sure, why not? Well — it depends on what build system you're using. What are you using just now? So: Cake is not going to replace TeamCity. TeamCity is the CI server, but within TeamCity, what you will typically do is have build steps. Within TeamCity, the CI server, you'll have steps, and those steps may be the exact same steps that I showed here. You might have a step doing the NuGet package restore, a step doing the MSBuild, a step doing the unit testing. None of that goes away; it's just that you've moved it into the build.cake file. So effectively, you'll then have one step within TeamCity, and all that step is going to do is run the build.ps1 file. You run that bootstrapper file, and it takes care of all those steps. Because, simply put, I love TeamCity — that's what I use. I'm not going to throw TeamCity away, because I need and want it: it's still orchestrating the build queue for me. But I'm taking some of the weight away from TeamCity and having it all contained within my one script. The benefit of that comes when you go fully cross-platform: I want the same build to run on Travis. That's where the bootstrapper comes in. You'll have the bootstrapper in the build.sh that runs on Travis, which is running Linux or OS X, but it's still running the same build.cake file. It opens up the option of running the exact same build without having to recreate each of those steps within those CI servers. So no, Cake is in no way, shape or form trying to replace TeamCity, because TeamCity does a great job of what it does, and it does have those build steps that can do some of the work we're showing here. But doing it that way means you have to replicate those steps on Travis or Jenkins or Bamboo or whichever other CI systems you want to run on. By putting it into the build.cake file, the commonality comes in, so you can run the same thing across all of those platforms. So don't throw away TeamCity — you definitely want to keep using it. Any other questions? Okay.
If not, there are a couple of resources there, just pointing you to the Cake documentation. There's a recent podcast that we did on the MS Dev Show, and there are a couple of blog posts out there that show how to get started with Cake, so I'd encourage you to go and take a look at those. But if that's it, that's all I have to say. Thank you very much, and thank you for attending. There are also stickers for Chocolatey, which I'm also involved in, so feel free to come and grab some if you want.
Have you ever wanted to create a build script for your application, but been faced with learning a new language, or DSL, or writing more XML than any person should ever need to? Have you ever wanted to create a build script that will work cross platform? Have you ever wanted to create a build script that has first class support for the most common build tools like XUnit, WiX, SignTool, and many others? Have you ever wanted to create a build script that uses a language that you already know, and love? If you have answered yes to any of these questions, then the Cake (cakebuild.net/) Build Automation System is for you! In this session we will start with a standard .NET solution and incrementally add a build and orchestration script to compile the application, run unit tests, perform static analysis, package the application, and more, using the C# skills that you may already have.
10.5446/51781 (DOI)
Hi everyone, I'm Philipp, I'm from Vienna, Austria. That's why I'm sticking to English, so you don't have to deal with my Austrian accent. But otherwise, if you have any questions in chat — if you ask in German, that's fine as well. I will try to keep an eye on everything. Ask in English, ask in German, I'll try to respond to whatever is going on. So let's dive into seccomp, the next layer — or maybe an additional layer — of security for your applications. Security is oftentimes this approach where we say everything is fine, and maybe we know in the back of our head that not everything is perfectly fine, but we'll just assume that it is. Until something happens — and that something might be some bad exploit, or somebody starts mining Bitcoin on your AWS instances, or whatever bad thing. And at that point you realize that nothing is fine anymore; everything is on fire and terrible. We don't want to get to this terrible point, so we want to figure out how we can avoid getting to this bad place. Obviously, there are no silver bullets. Unless somebody is trying to sell you something — then they probably have some silver bullets, and there's a saying that you should be like a werewolf: be very afraid of those. So don't let anybody tell you that there is a single solution to fix all of your problems, but there are many different things you can do to improve your situation, and that improvement is what I want to cover today. The main principle that we follow here — and hopefully follow in many other places — is the principle of least privilege. Why have privileges if you don't really need them? And seccomp plays very much into that area. Seccomp, in general, prevents the execution of certain system calls by an application. Let's assume you have a remote code execution vulnerability in your code, but you're using seccomp and you're not allowing certain actions. Then even if somebody can exploit your application — say your application never needs to fork another process, or never needs to make a network call — if you have dropped those privileges, then even if somebody breaks into your application, they still cannot do these things, because your application can simply never reach those calls. It's just one more layer, protecting against things you never need to do anyway. And it can either abort the system call or it can kill the entire process. So if somebody breaks into your application and tries to exploit something, it might just kill the process — and then you see that something is wrong and can react to it. Seccomp, in general, is like an application sandbox. It's really on the application layer: the application registers its own seccomp profile to limit or drop the privileges that it doesn't need. That is really the gist of it. It was initially added to the Linux kernel in a very old version — 2.6.12, roughly 15 years ago. But that was only the first step of what we have today. Back then, the process could call prctl to enable seccomp and put itself into strict mode. And strict mode is really strict, as the name implies: it only allows the read, write, exit, and sigreturn calls. So you cannot open a network connection.
You couldn't even dynamically allocate memory with malloc, for example, because that system call would not be allowed — which makes this very secure, but pretty much useless for most real-world applications. You couldn't even open a new file to read or write if the file handle wasn't already accessible. All of that didn't make it very popular because, well, it was very tightly confined — probably too tightly confined for real-world applications. It was more like: okay, this is possible, and it needs to evolve further. Which then happened in 2012, when proper system call filtering was added to the kernel, and a little later on we got libseccomp for easier configuration of seccomp profiles. So this is the history of how we got to where we are today. If you want to see the system calls that you have on your system — and I assume everybody has seen those, but just for the sake of completeness — run man syscalls, and, scrolling a bit further down here, you have all these different system calls. Whatever you can do here, or whatever your application might require, all of these could be allowed or dropped. This is just to give you the idea that there are a lot of them by now. They are, by the way, also platform dependent, so you might need a platform check, because the system call number might be different on different platforms. And this is what you can then allow or drop as you go along. You then use seccomp as the interface to actually interact with those system calls and allow or deny them. If you look at the seccomp man page, it shows you how to interact with a seccomp profile: you can either set the strict mode, or you can set the filter mode, which uses the Berkeley Packet Filter, BPF. Are you already using BPF, or where might you already be using BPF? Because probably — I'll try to keep an eye on the chat — you are using BPF already somewhere to filter something. Probably when you use tcpdump, in those desperate moments when you need to see what is happening on your network — the filter expression you write for tcpdump to find the relevant traffic is BPF. So this is how you use seccomp: you write, in BPF, what is allowed and what is not allowed in your application, and you register that BPF filter. The strict mode, like I said, is probably overly strict for a real-world application. And just to show you what the minimal setup would be: you need to include the right headers, and then the BPF prog contains the filters for the system calls that will be allowed or denied. What we are doing here is: first, I'm validating the architecture — because on a 32-bit versus a 64-bit architecture the call numbers might differ, and we don't want to emulate one or the other; we want to make sure we check for the right architecture. Then we check the system call number. And then we have a list of system calls that we will allow, and everything else will be denied. Only what is on the allow list will pass through, and everything else will be stopped. The approach with the allow list probably makes more sense, because new system calls are being added over time, and you don't want to be caught off guard by new capabilities that you then accidentally allow. Rather, you want to whitelist — allow list — what you want to have in your system.
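As a minimal sketch in C of that setup — assuming an x86-64 target, and allowing only a tiny handful of calls to keep it readable. It mirrors the structure just described (architecture check, syscall check, allow list), not any particular project's filter:

```c
#include <stddef.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/audit.h>
#include <linux/filter.h>
#include <linux/seccomp.h>

int install_filter(void) {
    struct sock_filter filter[] = {
        /* 1. Validate the architecture - syscall numbers differ per platform. */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                 offsetof(struct seccomp_data, arch)),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, AUDIT_ARCH_X86_64, 1, 0),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
        /* 2. Load the system call number. */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                 offsetof(struct seccomp_data, nr)),
        /* 3. Allow list: read, write, exit_group - everything else kills. */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_read, 3, 0),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_write, 2, 0),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_exit_group, 1, 0),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
    };
    struct sock_fprog prog = {
        .len = sizeof(filter) / sizeof(filter[0]),
        .filter = filter,
    };
    /* Required so an unprivileged process may install a filter -
     * and dropped privileges can never be regained (more on this later). */
    if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)) {
        perror("no_new_privs");
        return 1;
    }
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog)) {
        perror("seccomp");
        return 1;
    }
    return 0;
}
```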
Everything that you don't know, you also block, to be on the safe side — which of course needs much more maintenance, but that is the trade-off here. The simpler approach would be to just disallow some calls that you know you don't need, but of course it's never as complete, protection-wise, as the allow approach. What you need to keep in mind is that once you register the profile, every system call of that application gets checked against that seccomp filter. That sounds very expensive, but it's not that expensive, because all of it runs in kernel space — you don't have to reach over to user space for every call. And maybe I should have mentioned this earlier: obviously, I'm only talking about Linux here. macOS has a kind of similar concept that, from what I've heard, is a bit buggier and not as widely used; Windows has something remotely similar, though I haven't touched Windows in many years, so I cannot tell you much about that. We will be focusing on Linux and the Linux kernel here, and there the seccomp filter runs in kernel space. The filter results that you can get are: the call can be allowed; the process or the thread can be killed; or an error is returned to the caller and the violation is logged. We will see later on how you can actually see seccomp violations in your applications or on your hosts. So, is anybody using seccomp already? Actually, quite a lot of applications do. This is not a complete list, but you're probably using these on a pretty daily basis: from Chrome to Firefox, OpenSSH, Docker — which we'll look into a bit — systemd, Firecracker; many other systems use seccomp filters for additional security. On the other hand, unfortunately, many other programs don't. But those that are more security aware often ship seccomp profiles with their application, because it's always an application sandbox driven by the application itself. How you could add your application to that list is by shipping a seccomp filter with your application, and I'll show you some examples of how to do that — for example, from your own Java application or from a Go application. So, Docker. Since I assume many of you are using Docker: it tries to have sane defaults for what makes sense security-wise, what is allowed and what is not. It disables around 44 system calls out of the 300-plus that are available, and you can find that in the default JSON, where you'll see what is allowed. If I'm not mistaken, they also use an allow list: all the system calls that they do allow are listed in that JSON file. Some of the system calls that Docker blocks: clock_settime, for example, because the time of your computer is not namespaced — if the container changed it, it would change the time on the host as well, which you probably don't want. You cannot clone new namespaces. You cannot reboot the host. You cannot unshare or change a namespace. All of those are things you normally don't need or don't want, and that's why they are forbidden by the seccomp filters. You can run Docker without the default seccomp profile, but this is definitely not recommended — do that at your own risk. For example, if you pass the parameter --security-opt seccomp=unconfined, it skips all the default seccomp filters.
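Just to illustrate the shape of such a profile: a heavily trimmed file in the same style as Docker's default JSON — a deny-by-default action plus an allow list. The real default profile is of course far longer, and field names can vary between Docker versions, so take this as an approximation:

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "open", "close", "exit_group", "futex"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

A custom file like that could then be passed to a container with --security-opt seccomp=./profile.json instead of the built-in default.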
With the profile disabled like that, you could, for example, run something as root directly here, whereas with the default seccomp profile, running something like unshare --map-root-user whoami would fail, because you would not have the capability to make the unshare call. So this is where seccomp comes into play here. By the way, if you're using --cap-add — for example, if you add some networking capability to your Docker container — that adds both the capability and, what is not really explicit in the name, it also allows the corresponding system call if it would be blocked otherwise. So you don't need to manually change anything with the system calls; --cap-add takes care of the seccomp filter for you transparently as well. Just so you're not confused about why some calls are allowed or not allowed: --cap-add adjusts the seccomp filter too. Yes, I see Firejail mentioned in the chat. We'll get to Firejail a little later. Firejail is very handy for actually trying out seccomp filters — this is like the hello world of seccomp filters, and Firejail is definitely the right tool for that. To figure out whether any of your applications are using seccomp filters already, what you want to do — let me change to my console again — is grep for Seccomp in the status of all the processes we have running. Then you can see: zero means it's not using seccomp; one would be the strict mode; and two means it's using a BPF filter, a seccomp profile. So for example, we could just take a look — let me cat the proc status of this process here. Okay, this seems to be systemd that is using seccomp — or let's take a look at something else. It's a surprise for me as well which process we're getting here. Okay, this is systemd-logind. Let's see, maybe we're lucky and get something other than systemd — but that's also a starting point, to see that systemd uses seccomp filters. Okay, Heartbeat. Heartbeat is Elastic's Heartbeat, which is basically a pinger to check whether a system is up or down. I work for Elastic — I'll get into that in a moment. Our products all use seccomp; that's also why I'm talking about this, because we have invested quite some time to make seccomp available in most of our products. And the Beats that we have all use seccomp filters, like Heartbeat and the other Beats. Okay, so this is what we've just seen. This would be one example — and I had systemd-networkd here as well. You have all the examples in the slides, but you can just check on your own system what is using seccomp.
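For reference, roughly the commands from this part of the demo — the container image is just a placeholder:

```sh
# run a container without the default seccomp profile (not recommended)
docker run --rm -it --security-opt seccomp=unconfined alpine sh

# which processes have seccomp enabled? 0 = off, 1 = strict, 2 = BPF filter
grep Seccomp /proc/*/status

# check a single process (replace <pid>)
grep Seccomp /proc/<pid>/status
```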
And we add another component more recently or it's been a couple of years by now as well called beats for like lightweight agents or shipwrecks, which also have the reason that since we have now a bee in elk and there is no bee in elk so far, we try to come up with something new and it looks something like this. So you can see at first it was an elk and then it developed into this, which we call the elk or elk bee. As you can see, it has horns and is a bee. And beats is also using seccomp profiles, which basically leaves us in our stack here. We have elastic search that stores the data. So this is secured by seccomp profiles and beats, which are the agents that you roll out on many of or all of your hosts to collect logs or get metrics or check if systems are up or collect security events. Those are also protected by seccomp, which kind of makes sense. The thing that keeps you data should be well protected and the agents that you roll out across your hosts also are well protected. Data and logs are still on the tool list to add seccomp profiles. So how are you doing that with seccomp profiles in a Java process? Because elastic search is a Java process. So what you're using in Java is JNA, the Java native access. And with that Java native access, you can actually influence or change what is being allowed down to the seccomp profiles. And I've linked the right piece in the current release in the source code here. So I'll show you some pieces of source code, but I've also always linked to the right pieces. So if you have a Java application and want to do something similar, this is where you can just copy the code from. So I hope that is not too small for you to see. Might be slightly small, but sure. I'll try to explain what is going on here. So the first thing that we are doing here is we have a check running as root. This is not seccomp specific. I'm just mentioning this here because it's another kind of like brick or layer in that wall of protection is if you try to run elastic search as root on Linux because Windows doesn't have the same concept, then the process will throw a runtime exception and system exit because you should not run any server processes as root. And the process will just fade. This is not seccomp, but this is just another layer here. And after that, if we are not running as root, then we actually install the system called filters. And those then look like this. And if this is too small, this just shows that if you're on Linux, then you are calling the Linux seccomp.pro file implementation. If we're on a Mac, we're calling that one. We have a check for SunOS and FreeBSD even though they are not officially supported anymore. And we also have some check for that on Windows. And if the operating system is anything else that we don't support, it would also be immediately. And then, yeah, like I said, on other operating systems like on Mac or on Windows, there are similar concepts, but I would just focus on seccomp profiles here. So we'll skip over the other operating systems. So here on Linux, we check the architecture. And then, for example, we limit that you cannot fork the process. So even if we had a remote code execution in Elasticsearch, which I hope we don't, even if you could run arbitrary code, you could not fork out another process or you could not execute another binary because we never do that. So we drop the capabilities for that since we never need those permissions, we'll just not allow the process and drop those. 
Heading over to Beats, which are written in Go: we have written our own seccomp BPF DSL — a nicer syntax — in Go, which is an open source project that you can use in your own Go binaries. What that looks like: you define your rules in YAML. Whether that is a good or a bad idea depends a bit on where you stand on YAML. Some people will say this is the only right way to do things — probably those who use a lot of Kubernetes — and many others will say: if I could have it in code, I would rather do it in code. The library that we have built here does it in YAML; it is what it is. You can set a default action, which here is allow, so by default it will allow everything. By the way, these are not the actual rules that we have in Beats — in Beats, the default action is deny, so there we add the capabilities that we actually need, because it's the more secure approach. This is just an example of how you could use the library. And what we drop here are these permissions — connect, accept, sendto, all of those are dropped — which, again, would not make much sense for a Beat: it's a shipper over the network, and if it cannot connect to anything, it's probably not going to be a very useful shipper. But this is just an example; this is not what the Beats do. For the actual rules, you can see those in this piece of code here, and you can see they are platform specific. On Linux, on the 64-bit platform, we have a very long list of what is allowed: accept and access and bind, et cetera. Lots of things are allowed here. Like I've said two times already: allow over deny. Since new system calls might be added over time, you don't want to be caught off guard by a new Linux version allowing something that you don't want in your application. If you only allow what you explicitly need, you will never have that problem. We'll get to Firejail in a moment and how to do all of that. Just to give you a bit of an idea of what we can do here — and again, this is like the hello world application of everything — I'm using netcat. I'm listening on port 1025, and I can check it myself; I'm just using telnet. This is a live instance — if you're quick, you might be able to send me a message as well. Please behave. So if I say hello, I receive the hello that I have sent here. And you can see I am on the Austrian telecom here, and this is where this message originated from. I'm using netcat not because netcat is a great application that anybody would run in production, but because netcat makes this very easy to see — we will focus on the bind here. Netcat binds to this port and is then able to receive something. And we want to dive into the different things that we can deny here, and how you can even figure out which permissions or capabilities a program needs. Once netcat is started: if you run it under strace, you can then check for the bind call it's using — just run the binary again under strace. So if I run that one more time — am I in the right process? Here we go. I'm only interested in the bind, because otherwise you would see a lot of system calls happening here. Okay, somebody tried to connect — which is not me anymore. So here, this was the bind call that we traced, and you can see we did make the bind call. So if we take away bind, this program would not run afterwards.
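Coming back to the Beats library for a moment: the same policy can also be defined straight in Go code instead of YAML. This sketch follows my memory of the go-seccomp-bpf README, so field and constant names may differ slightly from the current API:

```go
package main

import (
	"log"

	seccomp "github.com/elastic/go-seccomp-bpf"
)

func main() {
	// Default allow, then deny the calls we know we never need - the
	// Beats themselves do it the other way around (default deny).
	filter := seccomp.Filter{
		NoNewPrivs: true,
		Flag:       seccomp.FilterFlagTSync,
		Policy: seccomp.Policy{
			DefaultAction: seccomp.ActionAllow,
			Syscalls: []seccomp.SyscallGroup{
				{
					Action: seccomp.ActionErrno,
					Names:  []string{"fork", "vfork", "execve", "execveat"},
				},
			},
		},
	}

	if err := seccomp.LoadFilter(filter); err != nil {
		log.Fatalf("failed to load seccomp filter: %v", err)
	}

	// ... the rest of the application runs with the filter in place.
}
```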
One other thing that is, by the way, pretty interesting: if you run strace with the -c flag, let it run, and interact with the program, it will at the end show you a summary of all the system calls it has been using. Because once you try to add your own seccomp filter, you will face the question: which system calls do I even need to allow here? And this might be one of the ways to answer it. Once I exit, it shows me which system calls have been used, how many times they were used, and whether there were any errors. And then, to write the right seccomp profile filter, you would just need to parse out this list of calls. Let's assume those are all the valid ones that you want to allow, and you put all of them on the allow list. Then your application should be able to run — if you have covered the full spectrum of the application's features. And of course, if you add new features that need new capabilities, you will need to run this again to extend your list of things that should be allowed. You can also do that programmatically, by the way. There are two projects that I found: one is some C code — a syscall reporter that you can add to your C program to list the calls out at the end — and there is another library that helps you figure out which system calls your application has been making, so you can actually see what you need. Okay, now getting to Firejail, which somebody already mentioned in chat. It can add seccomp BPF sandboxes, so we can take away capabilities from a process. What we basically want to do is: I'm running Firejail — I haven't specified any profiles, so you can just run this — and I'm dropping the capability to bind. If I run this, it will obviously fail to start the process, because it didn't have the capability, and the process just exited. You can throw strace into the mix again, by the way. If you run that, it shows you all the system calls that have happened, and then you can see: okay, here bind is being called, and the binary is killed immediately as soon as it tries to use that bind which you have forbidden. So this is how you can simulate it: you either create a profile for your binary, or you just list out the capabilities that you want to add or drop. You can try to run it and figure out: is this working, is this too strict, or maybe still too lenient? And with strace, you can see all the system calls that happened before, and what is happening right around the point where the application exits. So this is just a nice overview to see what is going on here.
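That Firejail experiment, expressed directly as a seccomp filter in C — a sketch of the inverse of an allow list, where everything is allowed except bind, which kills the process. The architecture check from the earlier example is omitted here for brevity:

```c
#include <stddef.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/filter.h>
#include <linux/seccomp.h>

/* Deny just one syscall (bind) and allow the rest - roughly what the
 * Firejail demo did to the netcat listener. */
static int deny_bind(void) {
    struct sock_filter filter[] = {
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                 offsetof(struct seccomp_data, nr)),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_bind, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),   /* bind -> kill process  */
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),  /* anything else -> allow */
    };
    struct sock_fprog prog = {
        .len = sizeof(filter) / sizeof(filter[0]),
        .filter = filter,
    };
    if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)) return -1;
    return prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
}
```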
Two: could you limit the changes to seccomp filters? Or, the more YOLO approach: don't care, if somebody is smart enough to do that, it's tough luck. It's actually the second one. So you can set, and you should set, no new privileges. Basically that means: once you set this, any privilege that you have dropped, you cannot take back again. So once you have disabled bind, even with a remote code execution, your binary could not add the capability to bind anymore, because we have set this no-new-privileges flag. You could further tighten down the rules afterwards: at runtime you can always disallow more system calls over time, but you cannot re-add the ones that you have already removed. We're doing that in Elasticsearch as well. It's a bit hidden inside a system call filter there, but the prctl option that sets no new privileges, PR_SET_NO_NEW_PRIVS, is number 38, which has been around since Linux 3.5; there's a sketch of it after this paragraph. And we actually set that at the very beginning. Generally you set it at the beginning, so that whatever you drop after this point, you can never take back again. The same goes for Beats, where, since it has this nice structure and wraps everything behind the scenes, you just set no_new_privs to true, and then you can never retake the privileges that you don't need anymore. So we can lock down all the things and hopefully protect against that, which leaves us with one thing: how do we figure out what has gone wrong, like if we have any seccomp violations in our applications? And for that, we can use another tool in the Linux tool chain, which is auditd, the Linux audit daemon, which can also react to certain activities that your applications are doing. And for that, we have wrapped it in one of the Beats, which is called Auditbeat; auditd, Auditbeat, kind of makes sense. We basically wrapped the output of auditd, because its format is a pain in the ass to parse; we've wrapped that in a Beat so it can ship directly to Elasticsearch. That is, by the way, wrapped again in a Go library: we have go-libaudit, which wraps auditd so we can use it in Auditbeat. Again, this is an open source library that you could use in your own applications. And what this looks like, let me try to find my browser. Sorry, my browser is on the other window. What we have here, let me quickly refresh. So here I have Auditbeat running, and it's collecting all kinds of things on my system that might be security violations. You can see in the last 30 minutes we had a couple of thousand hits of things that were possibly happening. What I want to do now: I have this event action, violated seccomp policy. Let's filter down to that event action. Violated seccomp policy, that sounds good. And you can see, in the last 30 minutes, those were my two Firejail calls, where we had two seccomp violations. Let me open one to show you what we have actually collected here. You can see Auditbeat has collected the data. You can also see where this is running; this is my instance on AWS. You can see the message type, that it's coming from seccomp. You can see which binary we were using, so it was netcat that we tried to run. You can see the primary actor, so the Ubuntu user is the user that I was using here. I've enriched those events with some additional information, so for example you can see that this is running on my cloud provider, which is AWS.
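If you are not using a wrapper library that does it for you, setting the flag yourself is a one-liner. A minimal Go sketch using golang.org/x/sys/unix:

package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// PR_SET_NO_NEW_PRIVS (prctl option 38, Linux >= 3.5): from this point
	// on, the process and its children can never regain privileges that
	// have been dropped, and unprivileged processes may load seccomp filters.
	if err := unix.Prctl(unix.PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0); err != nil {
		log.Fatalf("prctl(PR_SET_NO_NEW_PRIVS): %v", err)
	}
	// ... install the seccomp filter and run the application ...
}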
And you can see details about the host, like the operating system; maybe the security issue you had is operating system specific. You can see the process name and, well, the user name and user ID. So with this information you could figure out: okay, there was a seccomp profile violation, which binary was affected, from which user has this been started, maybe which instance on Amazon this is using, or which IP address is affected, or which base image is affected. And then you could figure out: is this a real problem, or don't I care that much about it? And by the way, we also have a bit of a nicer interface for that, which is called SIEM, security information and event management, where you can see I have here all my hosts, which is a single host, and we have logins and whatever. What I'm interested in now is events, and we will probably have quite a few events in here. With all the screen sharing, my CPU is struggling a bit, but you can see here what is happening on that host. Since this is all based on Elasticsearch, which is pretty good with search, I'm just searching for seccomp, because that's what I'm interested in. And now it's showing me: violated seccomp policy. So this is just a full text search over everything. And now I could say, for example, oh, I'm interested just in that binary, or just in that host, or just in that user. Let's say I'm interested in that user. So I'm taking that user, dragging and dropping it here, and then I can see in that timeline what that specific user has been up to. And then you can see: okay, here with strace, program crashed, program crashed; here we have a seccomp violation. So this user is up to something weird. Or you can see here, I was executing netcat; this was when it was actually successful, when I just was opening port 1025 and we could chat with it. So that one was actually successful, whereas the other ones were not so successful: either the process crashed or there was a seccomp violation. So this way you can see what the user is up to, and then we could either isolate the user or take down the host that is affected, or whatever. So this is just building on top of, and all around, what seccomp profiles can already provide to you. So, to wrap up: I'm always comparing this a bit to one of the many bricks in the wall of your security setup, and seccomp is one very handy tool. If your application doesn't need specific permissions, why give them to the application in the first place? Just drop them and you're good to go, and you don't need to worry that somebody will execute a binary or fork your process, because you know that you will never do that. One question that I see come up every now and then is seccomp versus SELinux or AppArmor. They are similar in that they all do kernel level filtering, or interception of system calls. Where they differ is that seccomp is actively set by the process itself, whereas SELinux or AppArmor are mandated on the whole system level and applied before the process runs. For seccomp, the binary brings its own rules and enforces them, which is nice if the application author provides those; but if they don't, that's kind of problematic, because you would need to add them yourself. Seccomp is pretty widely available: your browser, Docker, Firecracker, Firejail, lots of other systems use it. And it should be used more widely, I think.
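If you wanted the same filter as in the demo without Kibana, you could query Elasticsearch directly. A sketch; the index pattern is Auditbeat's default, and the exact event.action value is my assumption based on the label shown in the UI:

curl -s 'localhost:9200/auditbeat-*/_search?pretty' \
  -H 'Content-Type: application/json' \
  -d '{"query": {"match": {"event.action": "violated-seccomp-policy"}}}'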
So if you can, and you have some application with security sensitivity, it might be an interesting project to add seccomp profiles to your application, so that those are installed when you run your binary. If you want a platform independent way to interface with the Linux kernel system calls, there is libseccomp, which might make writing those rules a bit easier for you; that is, I think, the final tip I have in my slides for working with seccomp, and there's a sketch of it below. Oh yeah, Windows. That one's quite a mouthful: PROCESS_MITIGATION_SYSTEM_CALL_DISABLE_POLICY is, I think, the thing that is pretty close to seccomp on Windows, because it also puts restrictions on what system calls a process can invoke. But it seems to behave quite differently from what you have with seccomp on Linux. Like I said, I haven't touched Windows in a couple of years, so I haven't tried that one out, and I will try to avoid it if I can help it. And with that, are there any questions? Let me try to find my chat again. Can you share the notes in the shared tab? Which shared tab? Wait, the one thing that I wanted to drop just in case, once there are the slides; let me find them, give me one second. This is it. So if anybody wants to have these slides. Any other questions? Sure, cool. The Jedi metric seems to be working today. Any other questions? We should have plenty of time, though I don't think anybody wants to hear me ramble for 55 minutes, so I think 40 was good. Any other questions? Seccomp, Elastic related, auditd, whatever you want. If not, I wish everybody... wait, one more: a GUI for interacting with seccomp? I haven't stumbled over any GUIs for seccomp, to be honest. But since BPF and everything seems pretty low level, maybe this is your chance to write a nice project on GitHub or wherever. I haven't stumbled over anything, but to be honest, I have never looked explicitly for a GUI for writing seccomp profiles. Next question: does the app centric approach still force you to trust the application? Yes. I mean, of course it does. If you run somebody's binary, I guess you trust that the binary is doing the right thing, so seccomp is not going to save you from that. Seccomp is just a layer of protection for when they have a security issue in their application. It will not protect you against malicious binaries. So yeah, you can compile it yourself and add your own seccomp profile as another layer of protection, but generally I would see seccomp profiles as a security feature that somebody who writes an application and cares about security adds as a benefit. It doesn't protect you against bad binaries. Does it make sense to use for web apps? I mean, web app, it depends a bit on what you define as a web app. Elasticsearch also has an HTTP interface, you send it JSON and it sends JSON back, and you still want to protect it with seccomp. It's probably getting a lot trickier if you have a general purpose programming language and run that in an application server. But for example, in Elasticsearch there are multiple security checks that we do. So seccomp is one, so you cannot fork the process or call another binary, because we don't do that. Then we check that you're not running as root. And, for example, we're also using the Java Security Manager.
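libseccomp also has Go bindings; a minimal sketch with github.com/seccomp/libseccomp-golang, recreating the "drop bind" rule from the Firejail demo. The API names are as I remember them from the library's documentation, so treat this as a sketch:

package main

import (
	"log"

	libseccomp "github.com/seccomp/libseccomp-golang"
)

func main() {
	// Start from an allow-everything filter, then make bind fail with an errno.
	filter, err := libseccomp.NewFilter(libseccomp.ActAllow)
	if err != nil {
		log.Fatal(err)
	}
	bind, err := libseccomp.GetSyscallFromName("bind")
	if err != nil {
		log.Fatal(err)
	}
	if err := filter.AddRule(bind, libseccomp.ActErrno); err != nil {
		log.Fatal(err)
	}
	if err := filter.Load(); err != nil {
		log.Fatal(err)
	}
	// bind(2) now returns an error instead of succeeding.
}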
And with the Java Security Manager, which is Java's own security concept, you can limit things so that you can only read files from a specific directory, and only from a specific part of the code. So with the Java Security Manager you could say: only this one package here is allowed to access these configuration files, because no other part of my application needs to read or write anything there. Or, for example, there's one package that writes to disk, and only this one package can write to the data directory; there's a sketch of that after this paragraph. So if you have a bug or problem in another package, it would not be allowed to actually write to your data directory. That's another layer here. I don't think vanilla seccomp will be the solution for a generic web application, but it's one of the many pieces that you want to have in there. For example, I would need to check whether nginx uses seccomp; maybe somebody has it running and can quickly check, with the command from a bit earlier, whether nginx could fork out another process or whether it doesn't care, because that's where seccomp would come into play. Yeah, to be honest, I have not compared it to sandbox_init and pledge. I'm not really sure if anybody has any experience with those; I would also be curious. I once had a discussion where somebody said macOS has a similar concept to seccomp, but it was less mature from what I know. I'm not sure if that is sandbox_init, or if that is also called seccomp on Mac. I have been very Linux focused here, to be honest. Any final questions? Or is everybody happy to head over to their Sunday afternoon, the next talk, a break? Cool. Well, if you have any other questions, this is my Twitter; just ping me if you have anything. Thanks a lot for joining on a Sunday, which is a tough day, especially Sunday afternoon. Enjoy the rest of your day. Thanks for having me. Let me know if you have anything else on Twitter. Thanks, everyone. Bye.
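For the Java Security Manager part, the rules live in a policy file. Strictly speaking, permissions are granted per codebase (JAR) rather than per package, but the idea is the one described in the talk; a hypothetical sketch with made-up paths:

// java.policy, hypothetical example
grant codeBase "file:/opt/app/lib/storage-module.jar" {
    // only the storage module may touch the data directory
    permission java.io.FilePermission "/var/lib/app/data/-", "read,write";
};

grant codeBase "file:/opt/app/lib/config-module.jar" {
    // the config module may only read its configuration file
    permission java.io.FilePermission "/etc/app/app.yml", "read";
};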
Why should you allow all possible system calls from your application when you know that you only need some? If you have ever wondered the same then this is the right talk for you. We are covering: * What is seccomp in a nutshell and where could you use it. * Practical example with Docker, Elasticsearch, and Beats. * How to collect seccomp violations with Auditd. Because your security approach can always use an additional layer of protection.
10.5446/51734 (DOI)
Okay, maybe I can start, at least introduce myself a little bit. My name is Henrik. I live in Finland, where I am currently also doing this talk, close to Helsinki in a small town called Järvenpää, which in English means the end of the lake. It's of course one of the many lakes that we have in Finland. I last spoke at FrOSCon I think maybe eight years ago or so, so it's really nice to do it again after a while. I currently work at DataStax, where we develop the Cassandra database, and previously I worked many years at MongoDB, and before that with both MySQL and MariaDB. So my professional interest has been very much targeting open source databases, and that's why I wanted to do this talk for you, to look at the different kinds of NoSQL databases that we have. So let's start. Which screen has focus? Now it works. All right. By the way, in my free time I also maintain this project called Impress.js, which I used to do this presentation; it's a browser based presentation framework. All right. So I don't know if it's still the case, but earlier in my career, when I started speaking about MongoDB and selling MongoDB, you would often have this discussion: should I use a relational database or should I use NoSQL? And maybe some of you might still think that these are the two options you have, and of course there are lots of databases to choose from. So if you start going through it, I think there are at least 200 or 300. In fact, the list is so long that the browser runs out of memory in this animation. So the point of this talk is to zoom in a little bit on the NoSQL side, where it turns out that NoSQL is not like one type of database that you can choose to use. If we look into this category, there are five different categories, or actually there used to be five when I first did this talk. But now, much thanks to the success of Elasticsearch, for example, I've come to the conclusion that we should actually consider search as its own category, because it's quite a big industry already, and it has its own use cases and characteristics. So if you see the NoSQL landscape broken into these six or so categories, then maybe it's easier to reason: should I use a key value database, or should I use a document database, or maybe choose between a wide column database and so on. So I want to spend the next hour going through the highlights of each of these categories: which are some of the databases in each category, what would you typically do with them, what are the typical use cases, and so on. Hopefully this will help you so that the next time you need to choose a database for some project, you will kind of know where to look when you know what your application needs. So let's start with the key value one, which is also the simplest in functionality. And really, this is where the NoSQL movement started, with memcached. Of course, if we go like 15 years back, it's quite amazing to think now, most of the internet was running on MySQL and PHP, Apache, Linux, so it was a LAMP stack. But to make MySQL faster, somebody realized that it's good to have a cache between the database and the PHP. So this is how memcached was born, and it's a simple key value cache: you just store objects in memory and get them back by an ID. This made websites faster back in the day, but of course it wasn't a database, it was just a cache.
So if we look at this category today, Redis is the clear leader, which is still very much in-memory focused but has some persistence built in, so you can use it, for some use cases at least, as a database. Key value databases, I think, embodied the primary ethos of the NoSQL movement, which was, when we were all using relational databases, to say: look, we need much more scale today, we need them to be faster, and actually they can be simpler. We are willing to compromise on functionality if we can just get more speed and more scale, up to, well, maybe hundreds of gigabytes back then; today in my work I see customers using hundreds of terabytes or even up to a petabyte for the database back end powering certain web services. The key value category takes this to the extreme, because it's of course extremely simple, and typically these solutions give good performance. And when you think about the use case where we have, let's say, a relational database as the back end storage and a cache in front, which you can also use Redis for today, part of the speed is not that Redis is faster than MySQL, although it is; part of the speed comes from the data structure. In a relational database, of course, your data is stored in a normalized form. This means the data that you need to show a single web page, for example, is physically stored in many different tables, many different physical locations on the disk. But typically what you store in a cache is actually the serialized object that you want to show for that web page. So in many cases you might be able to store a single web page, or a single REST response, in a single key in your key value database. And this already, even if the cache and your relational database were equally fast, the fact that you store data in a different format in the key value database makes it a lot faster. This is the case also for the next categories, wide column and document databases: they typically all end up being faster than a relational database. And it's not because they are necessarily a better database in an apples-to-apples comparison; it's because the data structure is more beneficial for this kind of fast retrieval. Okay, but there are other reasons why key value databases end up being a fast choice. Of course, simplicity always is good: the code can be smaller, and you can focus more on optimizing it when you have less functionality. The other thing, of course, is that these databases are designed to store everything in RAM, which is going to be faster than disk even in the age of SSDs. Okay, I already talked about the fact that the denormalized data is actually a big part of it. And in a key value database, because they don't support range queries, that is, greater-than or less-than type queries, and you only select individual keys, you can use a hash index, which is a faster index structure than a B-tree, for example. And then the same for sharding: when you have keys that you can hash, sharding becomes quite simple. So this was early on where you could find good scale-out solutions, let's say 10, 11 years ago when NoSQL databases started spreading. So what does it look like? This is an example from Redis, something like the sketch below. So yeah, you have SET and GET commands, and there is a key in this case. First I set the key name, and then I set the key age, and then there is a value which is in quotation marks.
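Reconstructed from the description, the slide would look something like this in redis-cli; the key names are from the talk, the values are my guesses:

127.0.0.1:6379> SET name "Henrik"
OK
127.0.0.1:6379> SET age "43"
OK
127.0.0.1:6379> GET age
"43"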
And what is interesting here is that also numbers are in quotation marks. So this is the simple case: on the client side, you would have to convert 43 to a number. In a way, these are just blobs, and you could have anything inside the quotation marks. But there is more. So let's see what you use key value stores for. Of course, the original use case has been caching. And there are also use cases which are kind of caches, but where Redis, for example, or another caching solution, is the primary data store. So the data that you put in Redis when you use it as a session cache isn't necessarily stored in a more durable way in some other database, like a relational database. A session cache is a good example. It depends maybe a little bit on the type of site you run, but let's say on a gaming site or something, it might be sufficient to have this kind of solution where, when I log in to the site, my session key and all the data related to my session is just kept there. For example, these video recorder applications or streaming applications like Netflix typically want to remember the position in some video that you were watching, so that if you take a break, or if the power goes out, you can log back in and continue watching from the position where you stopped. So this is an example of data that you want to keep as session state, but it's not terrible if the data is lost. For this kind of use case, a pure in-memory database might be sufficient, because there is only a small risk of losing this state; in the case of Redis you can actually persist it, but some of these, like memcached, you can't. Okay, in the video example it just means I have to start the video from the beginning and find the place where I was, but it's not like I lost money or lost some important data. Another case to use these kinds of databases for is various kinds of in-memory computing, low latency computing, maybe machine learning. Different kinds of recommendation engines might also fall into this category, where the personalized profile or personalized marketing that is generated can be stored in an in-memory database; if it's lost, we can generate it again from the source data that was used to make this recommendation for you. And one use case that you could use these kinds of databases for is also queuing, depending a little bit on your requirements there, but of course it can provide good speed. One more thing, especially about Redis, is that it actually supports more data types than just the quoted string that I showed in the example. So you still fetch these objects by key, but the value can actually have a more complex data type such as lists, sets, or maps, and even streams in the newest version of Redis. So this is a good example where, even if we have this category definition that says it's key value, products tend to evolve and push the boundaries of it. Okay, let's go to the next category, which is wide column databases. I would say this category was created by Google's Bigtable, but the most popular open source wide column database is Cassandra, the one I currently work with. And to some extent, DynamoDB, from a user point of view, has similar semantics. I'm actually not sure if we publicly know very well what the internal implementation of DynamoDB currently is. So what does a wide column database do?
It actually looks a lot like a relational database when you first look at it, because your data is in tables, and the tables have rows and columns. This means it has more structure than a key value database, because each column can also have a different data type, such as a string or an integer or a decimal and so on. And of course, in Cassandra's case, for example, you again can have maps and lists and even your own user defined types. So all of this sounds like a relational database, but actually, in a wide column database, all data access happens through the primary key. Well, in the classic case, that's the requirement. So in that sense it's actually similar to a key value database, in that you need to use the primary key to get your data. The data you get back, though, actually is in rows and columns. The primary key can be composite. So in Cassandra, you can separately have a column, or a few columns, that form the partition key, and this partition key is required for fast queries, because if you have a large Cassandra cluster, the partition key is what tells you on which server this data is going to be found. If you didn't use a partition key, which Cassandra does allow, it would mean you'd have to send the query to all nodes in your cluster and scan all the records in that cluster, and this would of course be quite inefficient; the point of a wide column database is not to do that. In addition to the partition key, you can have a composite primary key, so you add more columns that are used as clustering keys. This could be used, say, if my partition key is Henrik, like my name: I could find all users whose name is Henrik, and then with the clustering key I could order them by age or something. So within the partition, you can still do these kinds of operations, like querying on multiple fields or sorting and others. So it's a bit of a hybrid, somewhere between a key value database when it comes to the scale-out functionality, but with some elements familiar from relational databases. But it's definitely not a relational database, just to be clear. And if we look at an example, this is a Cassandra example; the query language is called Cassandra Query Language, CQL, so almost like SQL, but not quite. You create a table, columns have types, there is an insert statement and a select statement; see the sketch after this paragraph. What is interesting about Cassandra is that insert and update are both possible, but they do exactly the same thing. This is because of eventual consistency. When you do an insert or an update, so let's say you do both an insert and an update, one after the other, you cannot assume that these arrive at the data nodes in that order. It could also happen that the update arrives first at some node and then the insert. And this is why the internal implementation is such that insert and update essentially do the same thing: it's just a write, or an upsert, actually, is the name that we often use. So what use cases are these databases used for? In my career, I have never seen clusters as large as I see with Cassandra users today. I mean, I may have seen some, but with Cassandra this seems to be very common: clusters of 100 terabytes and beyond, and some really big companies might have petabyte Cassandra clusters. Also interesting for Cassandra in particular: the storage engine is write optimized, so typically you might use this for applications that do quite a lot of writes.
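A sketch of the CQL example described above; the table and column names are my guesses, not the actual slide. The inner parentheses in the primary key mark last_name as the partition key, with age as the clustering key:

CREATE TABLE users (
    last_name  text,
    first_name text,
    age        int,
    PRIMARY KEY ((last_name), age)
);

INSERT INTO users (last_name, first_name, age)
VALUES ('Ingo', 'Henrik', 43);

-- the partition key is required; within the partition, rows are
-- ordered by the clustering key, so we get everyone named Ingo by age
SELECT * FROM users WHERE last_name = 'Ingo';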
Then, storing session state: if you want it to be more durable than in an in-memory database, you could use Cassandra and others like it. Actually, that earlier use case too, and by the way Netflix does use Cassandra for this purpose: storing the position in a video that you are watching is actually quite write heavy, because you need to store it again and again as you are watching the video. Depending on the granularity you want, you might want to store the position every second; I don't think they do that, but at least a couple of times a minute you want to store a bookmark. So this requires a write-optimized database. And the other interesting feature these databases have: well, at least Cassandra and, historically, DynamoDB, sorry, this is not true for DynamoDB anymore, but Cassandra definitely still uses the Dynamo protocol for high availability, which again is a write-optimized replication protocol. Even if one server crashes, because it's a multi-master protocol, there is no short break when you cannot write to the database; you can always write data somewhere, and this is important. The protocol was invented at Amazon, where the use case was the shopping basket. For them it was important that if I'm looking at the Amazon website and I want to buy a book and I click add to basket, then this must succeed. Because this is the point where I decided that I want to spend money, it was a very high value write operation to the database. It must not be lost, and there must not be a one second or five second period when the database is doing some kind of failover and you cannot write to it, because even in five seconds they would already lose a lot of money. So this is an area I personally have been very interested in, and I have written in my blog about the Dynamo protocol, but let's go forward. So again, pushing the boundaries of what this category definition in the classic sense has been: Cassandra 4.0, which is now available as a beta, has actually invested in a new secondary index type, which should be more useful. Prior versions of Cassandra, and DynamoDB, have had something like a secondary index, but when you read the documentation, it says that you should only use it for low-cardinality data and for columns that are not frequently updated, and so on. Once you finish reading this documentation, you come to the conclusion that maybe I'll just use the primary key, which kind of was the idea in the first place. But now, this is a really exciting development I think in the Cassandra world: in 4.0, more useful secondary indexes. And actually at DataStax we have again a different kind of secondary index implementation, which we have contributed to the Apache foundation to be included in Cassandra, but it's not going to be in 4.0 yet. So this is an exciting topic to keep an eye on, and it will certainly broaden the use cases that you can use Cassandra for. So let me talk about document databases next, and of course this is an area where I have a lot of experience as well, because I worked many years at MongoDB, and MongoDB is kind of the leader of this category, or not kind of, they are very clearly the leader in this category. Just as another example, I wanted to mention MarkLogic, which is a closed source database. So why am I even mentioning it, when probably many of you haven't even heard about it?
An interesting thing about MarkLogic is that it actually uses XML as its storage format. MongoDB uses JSON, so both of these are document databases; in terms of features they do very much the same thing, they just use a different syntax, one JSON, one XML, but the semantics, the user experience, is very similar. So the key selling point here is the flexible schema. It's possible, but typically you wouldn't specify a schema like you do in relational databases, or like you did in the wide column databases with the CREATE TABLE statement. You can just start inserting data into the database, and different records can have different structures: each of them is a JSON object, and they can even be wildly different, though of course it typically makes sense that your application stores data where at least the records are somewhat similar to each other. And the logic here is that JSON, let's focus on the case of MongoDB, in itself embeds the structure. If we look at an example, and there's a sketch of it after this paragraph: I can insert a record as a JSON object, and I can already see that there are fields first name, last name and age, and two of them are strings and one is a number, and all of this you can just see from the JSON syntax. In fact, JSON is kind of better than XML in this case: in XML, unless I specify a schema, I wouldn't know whether the number 42 here is an integer or a string, but in JSON there is a difference. MongoDB also adds some types, like date, which don't exist in standard JSON, but it still follows the same flexible schema model: the date is encoded in the value itself, just like an integer and a string are different, so you don't need to specify a schema up front. This can be good or bad, but those who like it definitely enjoy the flexibility that they can just start coding and iteratively evolve their database, rather than needing to specify all of their columns up front. The last point about document databases is that they also allow the creation of secondary indexes. Here I have created an index on last name and first name, so this means I could then efficiently query on last name even if it's not my primary key. In fact, the primary key in this case would be the ID field, so there is something here that document databases have in common with key value databases: each record, even with a flexible format, still has an ID as its primary key, and the simple way to use this would be to just fetch these JSON documents by ID, kind of like a key value database. So what are the use cases for document databases? Actually, in the NoSQL space, this is the category of general purpose databases. You have records with fields, you can have arbitrarily many of them, you can have a primary key and secondary keys, so you can do all kinds of querying and sorting. So in many cases these databases, or MongoDB, compete with relational databases to some extent, and since some years ago MongoDB also added transaction support and so on. So what would be the main selling points to choose this over a relational database? First of all, many developers love using JSON, and they might use JavaScript or REST APIs in their architecture, so it's very natural to also store JSON in the database. A flexible schema can be powerful; it can also get you into trouble, but definitely, if you just want to quickly get going, it allows a more iterative style of development.
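The MongoDB example described above, sketched in the mongo shell; the collection name and concrete values are hypothetical:

db.people.insertOne({
  first_name: "Henrik",
  last_name:  "Ingo",
  age:        42,          // a number, no quotes: JSON carries the type
  created:    new Date()   // a MongoDB extension to plain JSON
})

db.people.createIndex({ last_name: 1, first_name: 1 })

// efficient thanks to the secondary index, even though _id is the primary key
db.people.find({ last_name: "Ingo" })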
And then of course sharding, which, if you look at this presentation now, you might ask what is so special about, because wide column and key value databases are also good at sharding. But compared to relational databases, if we think about the document database as a general purpose database, this is typically the strong selling point, because even today, in 2020, most of our classic relational databases are not so great at sharding, like MySQL and Postgres. So when you think about what use cases they are used for, you should think about the advantages in the previous point: where would a database with a flexible schema be a strength? One application could be something like a data hub, which is kind of like a data lake, but more operational and more of a classic database. The point of a data hub is that you aggregate data from many source databases, kind of like a data warehouse. But the difference with data warehouses is that in a document database you wouldn't spend time designing a star schema, which might quickly become complicated if your data warehouse has multiple different data sources that each hold different kinds of data. How do you get all of these different source databases stored in the same data warehouse? It would require a lot of planning in how you design the columns and data types, and these source databases might also evolve and change all the time. So a NoSQL database with a flexible schema has a strong advantage here, because if two source databases are different, and also if they change over time, you can just continue inserting their records into your data hub, because there is no schema that would prevent you from doing so. Now, of course, this is not a magic solution where everything becomes very easy. The difference with a relational database now is: yes, we could easily insert data into the database, but it might be more difficult to query, because all the records are not the same. If we want to search by first name and last name, for example, there might also be records there that don't have those fields; they might have the name in just a field called name, and so it's a bit of a mess. The strategy here is to postpone the problem: instead of it being difficult to get data into your data warehouse in the first place, it's a bit more difficult to read it out. So what more is there in this category? I actually struggled to think about it, because MongoDB, for example, was missing transactions for a long time, and now it has them; it also has views and other things. So I think this is actually a fairly mature category, and a lot of future development will be more incremental innovation, such as better performance, better tools for analytics, or integrations with other technologies, and so on. So, the graph category. Neo4j of course is the leader in this category and has, I would say, pioneered a lot of it. Because of some specific features, I added as a second option our own product, DataStax Enterprise, which embeds the Apache TinkerPop project for a graph query language.
So what do you do with graph databases? In a graph database, obviously, the data structure is a graph, so the records are nodes, and nodes are connected by edges, and in fact both of them can have properties. If you do that, then the edges start looking like records as well, but typically you think about the nodes as the main records, and then the edges are what join them together, to use a relational term. And of course, to do efficient queries, you also need to use indexes here, just like in a document database or a relational database. So what does it look like? Because it's a bit simpler, I actually used the DataStax example now, which uses the Gremlin language from the TinkerPop project; there's a sketch of it after this paragraph. There is some initialization here to create a graph session, but then at the bottom you can see this query where we query for a vertex, which is a node, where the name property is Marko; and then there is an out edge, he knows some other people, and we want to output their names. In this case it finds two other people that Marko knows: Vadas and Josh. So this was a very simple graph query; we could almost have done this with a join in a relational database as well. But why I wanted to show this example is this fluent syntax, where you have a dot and a function, and then a dot and another function. For complex graph queries, I personally like this Gremlin approach of using a fluent syntax, because it's kind of easy to read as you traverse the graph; it makes sense to me. So what do we use graph databases for? Often, and especially in the case of Neo4j, these are used for analytics. You have some data set which is a graph, you put it in a graph database, and then you do queries like: I want to find all of the friends of my friends who own a cat, for example. This has also been used, by the way, in some journalistic cases; in the Panama Papers, for example, I believe they used Neo4j, because they wanted to find connections. What if Putin has hidden his money in Panama: who is he connected to, and which lawyer and which other people were connected to this bank account? So you traverse this kind of network to understand the network of shell companies where they hide their money. Of course, any social media is a network, so this alone explains why this is a meaningful category: there are a lot of data sets today which are graphs. You can also use it a lot for recommendation engines and so on, because recommendation engines often follow this kind of logic. Amazon was famous early on for the recommendation that other customers who bought this book also bought some other book; this is actually a graph query. And when we talk about analytics, this is used a lot in national security. If you remember the Edward Snowden revelations, the typical case there is: which person called which other person with their mobile phone. Again, a graph query. So what does the future look like for graph queries? An interesting observation here is that there are many different graph query languages. I showed you Gremlin, which we use at DataStax; Neo4j has developed one called Cypher, which is completely different and kind of looks like ASCII art, even, where you draw arrows in different directions; and now what is becoming popular is GraphQL from Facebook, which is kind of like a REST API, but more like a graph; an interesting combination there.
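The traversal described above is the classic TinkerPop sample graph; roughly:

g.V().has('name', 'marko').   // find the vertex whose name property is marko
  out('knows').               // follow outgoing 'knows' edges
  values('name')              // print the neighbours' names
// ==> vadas
// ==> josh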
So an interesting question for the future is: will there be one standard language? Currently, GraphQL seems to be the most popular one, but I think it has some limitations for really advanced graph analytics, so it's maybe more targeting operational applications. Okay, and I said that graph is mostly used for analytics, and in the case of Neo4j, for example, I would say their internal architecture is definitely more optimal for read queries, or analytical queries; it is used for operational applications, but I'm not sure if it's very optimal for that. At DataStax, our graph database is a bit more optimized for OLTP, because it is running on top of Cassandra, which of course is very much an OLTP database. Sharding in general is a difficult problem for a graph. In the case of our product, which is essentially Apache TinkerPop combined with Cassandra, we store the graph as Cassandra records, where you then have this partition key, and the partitions are sharded over a large cluster. So it's possible, but this is still hash based sharding: all of the records in the graph are equal, and they're just spread out based on consistent hashing. So an interesting unsolved problem, I think, and unsolved in the sense that a product you could actually buy and use doesn't exist, would be how to do optimal sharding for a graph database. This means, if you think about a social media graph, for example: I have some friends, maybe some hundred of them in this data set, so queries starting from me are of course likely to traverse to my friends and friends of friends. An optimal sharding would mean that nearby nodes, which are likely to be accessed in the same query, would also be stored in the same shard and on the same disk page. And this is not how any of the graph databases that I mentioned actually work today; they access all data more or less equally. In the case of Neo4j, for example, you typically want your active data to be in RAM anyway, in which case of course everything can be accessed, each hop can be made, in constant time. Okay, the query engine category. I used to call this category Hadoop many years ago, but in reality what people use today is Spark, which has more or less replaced Hadoop, and we shouldn't forget Presto, which is actually powering Amazon Athena, so it's used quite a lot. Presto was published by Facebook. Now, if you talk to some analysts or other people with opinions, they would say that this category doesn't belong in this talk at all, because these are not NoSQL databases, because they are not databases. This is actually true: they are query engines. Spark and Presto query data that is stored somewhere. In the Hadoop case, of course, it used to be the Hadoop file system, but today maybe the most common place to store data is in S3 on Amazon. A use case could even be that you have just stored data in S3, like log files or something, and then later on you realize that maybe I should analyze something in these log files to understand my users better; then you can just put Spark or Presto on top and start analyzing your data. But you could also use databases as a data source: both Cassandra and MongoDB, for example, have a Spark connector.
And yeah, these are definitely, oh sorry, use cases come later, so to go back: these are definitely used for batch queries, so they are not OLTP databases, because, like I said, typically the data already exists in files, for example in S3. So these are used just for read queries, for analytical queries, and sometimes they can be really long running queries as well, if you have lots of data. Here is what it looks like in Spark; there's a sketch after this paragraph. There is actually a lot of code here to create a Spark session; this is a shell, not a programming language as such, but the shell uses Scala, I believe. So even in the shell you then create a session, and you have to create what in Spark is called a DataFrame: you create a DataFrame that maps to a file, in this case a JSON file, and then out of this DataFrame you can create a view, which you can query with SQL. So after you have done those first lines, you can actually use something familiar, SQL. And in my experience, for analytics in companies, the people that want to do business intelligence actually like SQL, because they learned it a long time ago, it is their standard, and they prefer it over learning some other language; in MongoDB's case, for example, the language is completely different. For developers that was usually okay, but the people who want to do analytics definitely want the SQL. Okay, so what are the use cases? Well, this is the category where we speak about data lakes, which used to be a Hadoop thing, but today it's just S3; I think if you have data in S3, maybe people don't even call it a data lake anymore. So what would you do? Analytics; machine learning is used for a lot of things nowadays, so all kinds of personalization, fraud detection, again national security. But in the end, it might just be classic reporting, just like old school data warehouses: at the end of the month you want to provide some kind of report, or maybe a live dashboard, so it's not at the end of the month anymore, but it has to be constantly updated. This is the modern version of a classic data warehouse, I would say. Spark also has a streaming version, which is interesting: real-time processing of data that is happening currently, or that arrived from some place just now. But the Spark streaming version is really still batch queries; it's just a very small batch of the recent data. It's nice because it allows you to use the same interface and the same SQL queries on a real-time stream. And I guess I already mentioned that Amazon Athena is based on Presto. And then for search, the crown jewel here is really the Apache Lucene project, which is used in Solr, which is a server, while Lucene is the engine that stores the data and indexes the data. But now, I would say, the market leader has become Elastic, which is a younger product compared to this; Elasticsearch is also based on Lucene as the data engine. So in both cases Lucene is the real winner here, a really valuable Apache project.
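The Spark shell steps described above, sketched in Scala; the file name is made up, and `spark` is the session object the shell gives you:

// map a JSON file to a DataFrame
val people = spark.read.json("people.json")

// expose the DataFrame as a view that plain SQL can query
people.createOrReplaceTempView("people")

spark.sql("SELECT name, age FROM people WHERE age > 40").show()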
So what does a search engine do? Well, you can search for words. In all the other databases that I showed, you search for fields; if you have a name like Henrik Ingo and you want to search only for the last name, typically it means you have to store the last name in a separate field and the first name in a separate field. But with text search, you can have a body of text that in a database would be stored as a single field, and the search engine, Lucene, will actually index each word separately. So you can search for individual words, and maybe even some wildcard patterns and so on, and you get ranked results: if you search for multiple words, and some record matches all of those words, or let's say all of those words are in the title rather than at the end of some long text, then it gets more points and rises higher in the results. And you can do faceting or highlighting. Basically this is kind of an indexing use case, but it's more complicated than what you typically do with B-trees in relational databases. Okay, so what does it look like? This is the Elasticsearch example, reconstructed in the sketch after this paragraph. It's actually a REST API with JSON records; possibly this is already one reason why it became so popular. In the first row there, we POST, that is, we insert in REST terminology, one record, and notice here that the name is in a single field. In the second row, the second query, we use GET, so this is a query, and we search only for my last name. Because Elasticsearch has indexed each word separately, it actually does find this record. And in the result set at the bottom, you can see that it's not just the name that is returned: there is also the index name, froscon here, which would be maybe what another database calls the database; then the type of record, people, would maybe be like the table; and the ID of the record here is id1. This is because in the first row I used froscon and people and id1 in the REST URL. So what are the use cases? Well, one is a search engine: if you want a search box for your website, kind of like a mini Google, then these are the products you use. But also other complicated queries that the typical relational database or document database B-tree doesn't cover. And then Elastic has Kibana in its product suite, which is like an analytics solution for log files, for example; you can use it for the same purpose as Splunk. You have some text data, you put it in Elasticsearch, and then you can immediately see, for example, word frequencies, or search for errors and so on, kind of out of the box. It comes very easily from the search engine underneath, so you don't need to spend a lot of effort creating specific indexes, because you can just index all the words. Well, log files is one thing, but also security monitoring: monitoring your firewalls or networks and so on, or physical security, and, I don't know why, probably national security again is using these kinds of solutions. So those were the categories. And yes, just one reason I started taking this seriously as well: Elastic is actually one of the biggest companies in the NoSQL space. It's younger than MongoDB, but at some point it already had a higher valuation even than MongoDB; both of them are public companies, so this is why we know. So it's definitely a big thing and growing fast.
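Reconstructing the two REST calls described above, in Kibana console style; this is the old 6.x-era URL scheme with a mapping type, as on the slide, and the exact ID and name value are my reading of the talk:

POST /froscon/people/id1
{ "name": "Henrik Ingo" }

GET /froscon/people/_search?q=name:ingo

The hit then comes back with "_index": "froscon", "_type": "people", "_id": "id1" alongside the document itself.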
So those were the technical details of all of them. In the end, I also wanted to say a few words about licensing, and if this is too much information on a single page, don't worry. About two years ago there was a lot of discussion in the database space, where a lot of these products changed licenses, I think mostly in reaction to Amazon taking some of these open source databases and offering them as a service on the Amazon cloud; and of course Microsoft and Google do the same to some extent, which is what they do. Some of these companies felt that that was a threat to them, so many of them changed licenses, and as you can see, the trend is to the right: they moved from more open licensing to more closed licensing. In the case of Redis, Redis actually moved to the right and then came back a little bit to the left. But I should also point out this is a simplified table, because many of these products actually have more than a single edition, like a community edition and an enterprise edition, and often something more even; so this is just to get the picture of which of these changed. On the last row there is an interesting development. Elastic didn't change as such; well, Elastic also changed, they always had some closed source components, but they changed architecturally how they store them in the repository. In response to this, Amazon actually launched the Open Distro for Elasticsearch. So in this case, for example, some authentication and security related features which are only available commercially in Elasticsearch, Amazon of course wanted to develop, and they have open sourced them for their own use. So in this case it's actually the opposite of what the discussion was in 2018: Amazon actually has the more open source version compared to Elasticsearch. On the other hand, you can see many of these didn't change. One thing you can of course see: those projects that are governed by the Apache foundation, for example, cannot change the license; they will always have an Apache license, because that's the only possibility for the Apache foundation. All right, that was all, and I see there are some questions, so maybe I will take this image away. We have a few minutes for questions. Where would you put InfluxDB? So I saw a little bit of that talk, but I don't know if there was some architectural explanation in the beginning that I missed. But I think generally there is a class of databases, the so-called time series databases, and there are various techniques for how you store data so that it's efficient to query a large amount of it, and also compressed on disk, if you know ahead of time that the records are ordered by a timestamp, for example; there could be other similar use cases as well. So in a way it is like a data warehouse, but it's not the same as doing a star schema on Oracle or Postgres; the performance difference can be huge, like several tens of times. And then, of course, they often have some sharding or parallelism as well. So I would maybe call it a separate category, time series databases. Could that be like a seventh category in this presentation? Maybe, but time series databases from a user experience point of view are not that different from a relational database: typically you use SQL, and then it's just faster and you can store more data. So it's somewhere in between.
Sometimes, many years ago, somebody proposed that there should be a category called NewSQL, because it was between traditional relational databases and NoSQL databases, which are very different. And where would you place Apache Ignite? I think I have forgotten what Apache Ignite does, I'm sorry; if you want to expand in the chat, you can. And there weren't any more questions, so I will just have to wait. What do you think, Rafael, are there more questions, or should we start to wrap up? Yeah, feel free to put more questions in the chat, we still have some time for them. Okay: a distributed in-memory key value store with SQL on top. I see, yes, and there are others with this kind of combination as well. So okay, I think you answered your own question very well: a distributed in-memory key value store with SQL on top. This reminds me of one category I was also considering when authoring this presentation. There is a class of distributed databases, like Google Spanner, CockroachDB, FaunaDB, which all try to present a SQL interface to the user and try to match what the good old relational databases do in terms of supporting transactions, OLTP, and providing a very high level of consistency. Many NoSQL databases had this concept of eventual consistency, where you use various techniques to deal with the fact that data arrives at different times at different servers. In this presentation I mentioned, in the wide column database case, that inserts and updates are actually both upserts, because they might arrive out of order in that architecture. So these newer distributed databases typically try to provide a higher level of consistency, so that they are distributed in their internal architecture but would actually be more similar to classic relational databases in the user experience. And this category I think is quite interesting for people like me who are interested in database internals, so I try to read up on them, but at the same time I think the category is still quite small and growing, so it's interesting to see where it's going in the future. Okay: should the consumption of memory and CPU be considered an important aspect to differentiate the database systems? Yes, so performance is an interesting question, and you often make trade-offs in many directions. For example, I mentioned that some of these databases are write optimized; this means that when you read the data back, there is actually more work. With in-memory databases, well, I mentioned Redis as an example of an in-memory or memory-oriented database, but also, like I mentioned, some databases like Neo4j, I think, work optimally when the data fits in memory. So this is a choice for you: your queries will probably be faster, but the architecture is also more expensive, because RAM is expensive. Some other databases, in this case maybe Cassandra and MongoDB, are more disk oriented, similar to typical relational databases, so you can have decent performance even if a lot of your data is not in RAM. And then, at the other extreme, you have Spark, which, or really all of these query engines, can read huge amounts of data that resides on disk, so-called cold data. But then Spark, again, is a good example of something that typically consumes quite a lot of CPU, because the data might not be indexed, for example. So yeah, memory consumption, CPU
And, by the way, also disk consumption. Regarding the InfluxDB question: the time series databases typically achieve a very high compression ratio, because they can use a columnar storage model and different kinds of compression like run-length encoding, so a database might use more or less disk as well. It's a space inside which you optimize, and it always depends on your application and what kind of data you have. Okay, I think there was a comment there: low for NoSQL, high for relational. Ah yes, good point. I think in the beginning I mentioned this as well: many NoSQL databases might actually be more efficient than doing the same thing in a relational database, because the data is stored as an object, for example in a key-value database, or as a denormalized document in a document database. The same goes for Cassandra, actually: typically a partition will store more data together, again in a denormalized way. Another question, about NoSQL data protection. I guess this could mean two things: either you are referring to security features, or to durability, which means that your data is safe on disk. If we speak about the latter, it is true that originally NoSQL database products were new and immature, let's say 10 years ago, and MongoDB for example still suffers from this reputation, I would say. My current database, Cassandra, never had such a reputation; in fact, in the relational database world I would say MySQL and Postgres have a similar difference between them. Okay, so let's talk about security instead. It's different, but security, meaning having different users and different permissions for users, is something that has evolved. Today, actually, if I think about Redis, Cassandra and MongoDB, all of them are fairly close or equal to the relational database world. I would say security nowadays is quite good with NoSQL databases. But again, we started 10 years ago with simple databases that were focused on scaling and didn't have many other features, and user authentication and security features were definitely among the things developed over time; today, though, I would say the situation is good. ACID compliance: this is another interesting topic. In relational databases we talk about ACID compliance when we want to say a database is good: when I write data to the database it's safely durable on disk, and there is consistency, atomicity and isolation, so that my queries and user experience are what I would intuitively expect. Already for relational databases there were decades of research to get to the point where we are today with isolation levels, but with distributed databases this is a whole new area. In the SQL standard you have four isolation levels, and then many relational databases also support snapshot isolation, which is not in the standard, so then you have like five. But with distributed databases you have like 15 or 20 different isolation levels, and some of them can be very weak.
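Before the example that follows, here is a toy Python sketch of the replica overlap idea behind the tunable consistency discussed below: with N replicas, writing to W of them and reading from R of them guarantees read-your-writes whenever R + W > N, because the read set must intersect the write set. This is a deliberately simplified model, not an implementation of any real protocol.

```python
N = 3
replicas = [{} for _ in range(N)]  # each replica is a tiny key-value store

def write(key, value, version, w):
    """Acknowledge after reaching only w of the N replicas."""
    for rep in replicas[:w]:
        rep[key] = (version, value)

def read(key, r):
    """Ask r replicas and keep the answer with the newest version."""
    answers = [rep.get(key) for rep in replicas[-r:]]
    answers = [a for a in answers if a is not None]
    return max(answers)[1] if answers else None

write("post", "hello", version=1, w=2)
print(read("post", r=1))  # None: R + W = 3 = N, the read set missed the write
print(read("post", r=2))  # 'hello': R + W = 4 > N, overlap is guaranteed
```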
If you insert data on one server and then your query goes to another server, the data you just inserted isn't there, and from an application or user point of view that can create situations that are unintuitive. For example: as a user, if I post something on a social media site and then reload the page, I would expect my post to be there. But in a distributed system it could actually happen that the post I just sent isn't there and I can't see it, but five seconds later I can. Even if we had another hour of lecture, it would still only be a small introduction to this area; I actually have such a presentation as well. But, for example, I mentioned the Dynamo protocol, or Dynamo high availability, in this presentation, which for me has been a really inspirational paper, and it's a very smart solution. Internally, a Cassandra cluster using Dynamo has this eventual consistency, where data updates arrive at different times on different servers, but using the Dynamo protocol you can also issue reads in a way that compensates for this. If you specify certain consistency levels in the Dynamo protocol, then you can actually have a consistent experience: if you write something, you can also read it back, and it's guaranteed to come back, or to fail if that cannot be guaranteed. A very interesting area. Okay, thank you, I enjoy the questions; it's usually better than the actual slides, so it's good that you are active in the chat. Encryption at rest: I think that also belongs under the security heading, and I didn't mention it. I have to apologize now, at an open source conference: I don't remember if Cassandra has it. We definitely offer it at DataStax in our commercial version of Cassandra. For MongoDB I think it's also only in the commercial version, but I wonder if Percona has an open source version of encryption at rest for MongoDB; you can ask Peter Zaitsev, who is after me in this same Hörsaal. I don't know about Redis, and I imagine Neo4j might have it as well, because their customer base is security conscious. Is security in databases a top priority? It depends a little bit on your customer base, but in Europe, of course, the European Union has set the bar quite high, so encryption at rest, for example, is often required for user data, so that you guarantee a certain level of data protection for data that has privacy implications. I totally agree, and it's again a big topic that could be a lecture of its own. Okay, are there any questions left? Okay, Hans has more comments about database breaches. I think we will have to continue this discussion in a separate channel. And there is another comment: a useful presentation for people new to NoSQL. So I'm glad this is hopefully a useful framework that you can use when you look at NoSQL databases; I'm glad you find it useful.
There are so many database products to choose from nowadays. It can help to understand the overall landscape and how the available alternatives can be grouped into five (or six) categories: key-value, wide column, document, graph, search, and Hadoop/Spark. The goal of this presentation is to briefly explain the strengths and weaknesses of each category, so that participants are able to choose the right tool for the job in each circumstance.
10.5446/51736 (DOI)
Welcome to the talk Open Trackers for Open Science with Daniela Gawehns. Have fun. Okay, hey, welcome everyone. I see people are actually trickling in, so perfect that we waited for a few minutes. I'm Daniela Gawehns. I work at Leiden University in the Netherlands as a PhD student, on a project related to data mining and data science in general, in very broad strokes; I'll get into my research a little bit later. Today I'm going to present on Open Trackers for Open Science. What do I mean by trackers? I mean activity trackers. And for open science, we'll look into a few of the definitions that exist and see what impact activity trackers, and the use of activity trackers as research devices, can have on our perception of open science and on doing open science with activity trackers: when does it work, when doesn't it work, and what do we need for it to work properly? If everything went well, you can see the second slide now, which is the outline of my talk. I will start by introducing what it means for us to work with activity trackers in behavioral research and in medical research. Then I'm going to give you a very brief overview of what hardware and software options currently exist and what people are using. Then I will give you solutions to the problems that are attached to the current hardware and software options. And then I'm going to ask the big question: what's next? There I hope to get everyone involved in thinking along about viable options for activity tracking for research. But let's look at what activity trackers are. When I say activity trackers, I mean those wrist-worn wearables or wearable devices. And to see if people are only logged in or actually listening, I want to ask you one question: are you wearing an activity tracker at the moment? We have a poll option which is open right now. So let's see if people are answering. Yes. Yes. Let's see. It was half-half, I guess, from what we see. Yeah. Well, I don't wear any; I don't even wear a watch. Can you maybe drop in the public chat what kind of wearable you are using? That'd be also super interesting. I'll close the poll. I did not close the poll. Anyone want to share what kind of wearables they're using? Garmin. Yeah. Anyone with Apple Watches? Probably not. Anyone with a device running AsteroidOS? I was hoping to find people that have a watch with AsteroidOS on it. But yeah. All right. Now that we're a bit awake, let's dive into three personas. I want to share with you how participants in research experience using activity trackers for research. So I'm going to give you three personas, and then we're going to walk through what they're experiencing when joining such research. Persona one is Mark. Mark has two children, aged eight years. They are in primary school, and they come home with a letter from their school: an invitation letter to join a study where scientists from the behavioral science faculty will join the kids on the playground and take observations. They will take notes with paper and pen; they will also send around questionnaires and surveys that the children can fill in and that the parents will have to fill in as well. And the researchers say that they are going to outfit the children with a wearable that they will wear for one week during the day.
So when the children come into school they will get a wearable, and when they leave school they will leave the wearable at school. That's persona one. Persona two is Janine. Janine is just out of prison. She actually has a history of being in and out of prison, and she's working with a coach to get out of the spiral of criminal behavior. With her coach she found out that there are a few triggers that might lead to her behaving in a negative or unlawful way. Those triggers are that she does not get out of bed for several days in a row, and that she is in touch with a set of people who are not good for her and who will lead her back into criminal behavior. And so the coach tells her: maybe you should join a study where we are going to use a wearable, a smartwatch. And whenever these triggers happen, the not getting out of bed and the contacting of that set of people, someone that she trusts, or the coach, will be notified. So it's a system where she wears the watch and a coach gets notified, so that she does not spiral back into criminal behavior. That's persona two. Persona three is Carla. Carla's mom is in a nursing home, because she has dementia and cannot live at home anymore. And also here there is an invitation letter by a university, saying researchers are interested in how much activity patients with dementia show, whether it's more sedentary behavior or very active behavior. And they want to use, as with the children, a wearable device that Carla's mom will wear during the day, to track not only her activity levels, but also where she is during the day, and whether she is using the park that surrounds the nursing home. So I hope the three studies are more or less clear. We have the left one here: Mark is the dad with two kids in primary school, where we want to use the wearables to check how children are playing on the playground and what they're doing the entire day. We have Janine, who might want to use the watch to check if some criminal behavior might reoccur. And we have Carla, where it's about dementia care patients and how much activity they display during the day. So my question, and I'm going to leave this here in the chat as well, is: what do you think are the biggest alarm bells? If you were Carla and were asked to outfit your mom with a watch, what would be the big alarm bells that would start ringing in your head? Would you allow the researchers to use a watch on your mom or on your child? Would you yourself use a watch if you were in such a coaching process? And what do you want to know, what more information do you need to decide? I see that one person has definitely found the way to the HackMD. You can just click on the link and edit the MD. And I can see that for the children's behavior project there are already people editing. The behavioral impact on the child, yeah. Children getting used to tracking: that's an interesting one, actually, I haven't thought about that one. The children could just get used to wearing such a watch and being, so to speak, followed by someone else. Yeah. Privacy and data sharing is something that is definitely interesting. And regarding the dementia care project, someone is writing about the consent: will the mom understand the consequences of the research? Yes. Interesting.
Okay, I'll leave the HackMD open, and we can see who else wants to contribute later. There's one more. Yeah. Someone wrote: a split between groups of parents. So probably one parent group says, yes, let's do the thing with the watches, and another group says, no, let's not do that. Okay. You might be wondering if people actually come up with such studies and really want to conduct them. Well, I'm one of them. I'm working on the dementia care project, where we're interested to find out how much dementia care patients actually move during the day, how much activity they display. And we also want to know if and how they're using the park that surrounds the nursing home. The interesting point here is that this nursing home is actually going to open the doors of the dementia care unit, so that patients who are usually locked into their care unit and cannot leave it without someone accompanying them are now free to leave that residential complex and can use the park. No one really knows what's going to happen, so it's interesting to look at that, and we are looking into using wearables for that research project. The children's project is also a project here in Leiden, where up until now they're using mostly, or only, proximity sensors to find out which children are playing with each other on the playground. They also want to incorporate more information about how children behave on the playground and track where they are on it, because it's interesting for them to find out more about unstructured playtime for young children and what that does to the children. And the third one was the ex-detainee project. We have an organization here called Exodus, and they worked with the Hogeschool van Amsterdam on a project where they wanted to design a wearable device that would give ex-detainees feedback on how they're doing with their own goals, the goal being, mostly, staying out of prison. So these projects actually exist; this is what I want to get across. The baseline is that those projects assist people who want to use activity trackers for various reasons. Some of these reasons are on the slide: the tracking of activity, of heart rate, location and interactions, but also something that is called ecological momentary assessment, which is short questions that are asked at various times per day or per week, where people are asked how they're doing, basically. The trackers are attractive because they are passive and almost non-intrusive; wearing a watch is something that you can ask people to do. You can use them for longitudinal studies over several weeks, and they will gather real life data. Real life data is data that is collected outside of the lab, and it is very rich. It's also not the most beautiful data to work with, because it comes with all kinds of noise, but it's very, very rich. So yes, people want this data, and people want to work with wearables for these reasons. Let's look at the three projects that I just mentioned in terms of what kind of data we want to collect and who the participants are. The data that we want to collect is some sort of activity data, which we can infer from accelerometry. EMA, the ecological momentary assessment, would be something that might be implemented in the children's project.
Location information is something we're looking at in all three projects. For the ex-detainees, something like call logs or messaging logs would be interesting. For the participants' age, we can say that the children are generally younger than 13; they're all primary school kids. For the ex-detainees, the range is pretty big. And then we have our geriatric patients, who are well over 65, most of them anyway. Then I have two more points of information that come in at a later stage, which are somatic health and mental health. For the children, we can assume that they're mostly healthy; there are hard of hearing children in some of the classrooms. For the ex-detainees, we would say they're probably generally healthy. In the nursing home, we have old people, geriatric patients: they will have heart conditions, for example, and mobility that is somehow limited. Then, from the mental health aspect: for the children, we have some special education schools in the data collection process, and that can influence what kind of movement patterns people show. For the ex-detainees, chances are you get psychiatric patients in that group; not all of them will be, but some might. And in the nursing home it's a dementia care project, so there are people with dementia in that group. Now, before I present what kind of big tech solutions are often used, I want to say that there are good reasons for using big tech, and we need to know the reasons why people use those big tech solutions in order to do better with an open source solution. So I'm not here to bash Apple, that's basically what I want to say. First up are the medical research devices. Medical research devices give us high quality data and access to raw data. They are tried and tested. The biggest problem is that they are really made for lab experiments or controlled environments. Plus, the medical research devices usually only track one thing; they have one sensor, or two sensors max. They are not smartwatches; they are usually very bulky boxes, basically, that you wear on your wrist. On the other side we have consumer grade devices, which I have summarized per platform; I hope that makes more or less sense. We have Apple Watch, Fitbit and Garmin, the big three. And then we have Android watches; we have Tizen, that's the platform used on Samsung watches; and we have AsteroidOS. First up are the big three, because most research being done is done with those, and I'll be very brief here, also because of time. So: Apple Watch. You can use the Apple Research app, which some universities are actually using, collaborating with Apple; it's probably pretty difficult to get into such a program. Then we can use the HealthKit and CareKit frameworks that they are offering. Here you have access to all kinds of health related data from the users. It is limited to a specific set of tasks that users can do, and limited to the health data being shared through the platform. You could also build your own application for Apple Watch. Here, again, you're limited by the certification process that you have to go through to actually launch the app. For example, logging in the background at high frequency could drain the battery so much that they would not allow you to certify the app you're developing.
Fitbit works together with Fitabase. Fitabase is a company providing help to researchers who want to use Fitbit as a research device. You can use their web API, and you can bulk download information per user. So you could basically open a user account per participant and then ask the participant to send you the data, or download it for them; that would be an option. Both the web API and the Fitabase solution have the limitation that you cannot get data at a very granular level; it's actually quite coarse. Garmin also works with Fitabase. They also have a web API, called the Health API, and they also offer a Health SDK. The SDK allows you to write native apps for Garmin watches; I did not look into the SDK. The big downside of using these big tech approaches is the granularity on a temporal level. For example, you would only get a heart rate measurement or estimate per minute instead of per 10 seconds. You would get no access to raw accelerometer data, only to the compounds. With the compounds, I mean the activity classifications that are usually done, activities such as swimming, walking, sitting still. Those are the compounds that I mean. So the granularity is very limited in these approaches. You have APIs that you can use, but then you depend on the tech companies actually allowing you to use the web APIs, and if they shut down those APIs, then you don't get your information anymore. And there are possibly certification issues, for example if you're draining the battery too much, even though you might not care about draining the battery because you only need information for one and a half hours. So what is it that still draws people to using Fitbit or Garmin? One is the availability and scalability of research. For example, in Germany you have the Corona-Datenspende, which allows people to share their health related data with a national institute that wants to estimate how corona is spreading throughout Germany along a temporal and spatial component. The idea is that if you have enough people joining this corona-datenspende.de data collection, and if there is a signal in all of these compounds, then you will find it. And they can do this because there are just so many people using a Fitbit or a Garmin. Another reason for using a consumer wearable is the design choice. Here I want to quote from a paper written two years back, where they worked with psychiatric patients. They write: to enhance acceptability and minimize user burden and stigma, widely available consumer oriented technologies were therefore considered. So they talked to their participants, suggested a few wearables, amongst which also the medical research devices mentioned earlier, and the user groups favored the wrist-worn Fitbit Charge due to its appearance as a lifestyle device as opposed to a medical device, one that is acceptable to both younger and older users, and due to the ability to view metrics related to sleep and activity via the Fitbit app. So here they worked together with the participants and decided to use something that gives the participants more than just being tracked. On a similar note, I want to mention the compounds again: as I said before, you don't get raw data, but you get activity classes such as sleeping, walking, running or swimming.
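To give a rough idea of what the per-user bulk download mentioned above looks like in practice, here is a minimal Python sketch against a Fitbit-style web API. The endpoint shape and field names follow Fitbit's public Web API as documented at the time of writing, but treat them as illustrative and check the current documentation; the OAuth token is a placeholder.

```python
import requests

TOKEN = "..."  # per-participant OAuth 2.0 access token (placeholder)

def daily_summary(date):
    """Fetch one day's pre-computed activity summary for one participant."""
    resp = requests.get(
        f"https://api.fitbit.com/1/user/-/activities/date/{date}.json",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

summary = daily_summary("2020-08-22")
# Only coarse compounds come back (steps, sedentary minutes, calories),
# not raw accelerometer samples.
print(summary["summary"]["steps"])
```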
So can we use those compounds to do research? There, the question is: what is your intent? In the same paper, the authors suggest that those devices actually work for clinical prediction, depending on the questions you're asking. They write, and in this case it was about sleep: our goal is not to draw conclusions about sleep parameters such as total sleep time or sleep efficiency per se, because if they were interested in sleep time or sleep efficiency, they might use something else, something that is validated. But rather, our objective is to ask whether changes in longitudinal rest activity patterns at a within-person level, captured using a wearable device, predict deterioration in clinical status. So they were not interested in how someone is sleeping, but in how sleep patterns, as approximated by the Fitbit, help them predict behavior or an episode of psychosis. It's about the intent, basically. The same goes for the Corona-Datenspende. Here you can see that they're collecting all kinds of information: how many steps you've taken, how many calories you've burned, how many flights of stairs you've climbed. All of those are compound information, something that you get out of the watch that you cannot really validate, or where you don't know if it's validated. It's an approximation of activity in general. And here, at the Corona-Datenspende, as I mentioned before: if there is a signal, you will find it. That's the hope, anyway. So, in summary: we have a solution for lab studies, that's the medical devices, which are bulky but precision technology. We have solutions for big data studies: widespread consumer grade devices where you have access to summary statistics, which is perfectly fine. For real life data collection: yes, if the intent is in agreement with what you're getting from the watch, then you can use the technology that is available. Now, when we look at our case studies, the biggest problem I see is that we have age groups below 13 and above 65. I don't know how the compounds, these activity classes, are generated; I don't know if they are trained on kids' data or on elderly data. For the elderly, we have geriatric patients, so heart rate estimations, for example, might be difficult if many people in your study population have a heart condition. We have dementia care patients, so their movement patterns might be completely different from a healthy population. So when we're looking at the compounds, there's not much we can do in these case studies. And then there was this privacy issue. When we looked at alarm bells: where is the data going? With whom is it shared? Do I have access to the data? Who else has access to the data? I think for the privacy part, there are two questions we should ask. Which path does the data take from the wearable to the computer of the researcher? And: is the privacy statement between the participant and the producer, the company, or is it a privacy statement between the researcher and the producer? This is something that any researcher has to take up with their privacy officer. And not all privacy officers know a lot about clouds, for example, about data being sent around all kinds of servers to then end up on the researcher's laptop.
From my own experience, I would say: take time and plan ahead to talk this through with your privacy officer. At last: open science. I promised it at the beginning, and now I have another 10 minutes or so to actually dive into the open science part of my talk. The definition by FOSTER is: open science is the practice of science in such a way that others can collaborate and contribute, where research data, lab notes and other research processes are freely available, under terms that enable reuse, redistribution and reproduction of the research and its underlying data and methods. Let's take this definition apart. We want to collaborate and contribute. The question is: can you collaborate and contribute if the research is really locked into a closed project where you don't have access, such as an Apple Watch and Apple research project? I would say it's very difficult to collaborate and contribute on such a project. The next point is: under terms that enable reuse. Given that we're talking here about proprietary data and models, and proprietary access to the data (that's the definition of proprietary), the terms are not open. That's basically it. And then there's redistribution and reproduction of the research and its underlying data and methods. Trying to reproduce something that is built on a black box algorithm, on a model, on a compound, whatever you want to call it, will be very difficult, because you don't know if this black box algorithm changes with updates of the watch. Say you have a longitudinal study: participants use a Fitbit for two years and they update their watch. It could be that the model that predicts how well you sleep changes over those two years. So you don't have consistency there, and that's not what you want if you want to estimate any effects in the data. It's actually a no-no from a robustness point of view. That being said, research projects do not need to be open science to be good research projects. There are situations where you cannot adhere to the standards I just mentioned and the projects are still good and valuable research. But this talk is under the assumption that you want to do open science, or as much open science as possible. I'm not the first one or the only one thinking about these things. There is, for example, a preprint by Nelson et al. from this year on how to use heart rate in bio-behavioral research, and they have a very good list of considerations. So, if you remember one of the very first slides, there are three more 'how' questions that I did not mention yet. And now we're getting into the discussion with you as listeners, as my audience: how can we open up those trackers, and how can we open them up in a way that allows for open science methodology and open science approaches? Okay, as the moderator, I just want to ask whether I should open the microphones so people can talk, if you like. Yeah, give me another two minutes and I'll go through the three things, and then we can open up the discussion, I think. Okay. Yes, cool, thanks. Yeah. So the first one is Wear OS, Android for wearables. I'm saying it's a cheapish option because there are just many wearables out there that run Android. I found this app, Vada, which can track the accelerometer, the gyroscope and the light sensor. It's just a package and can be installed offline, with no accompanying phone.
So I found that very attractive. I didn't try it yet; maybe there are people in the group who have tried this, or another app, on their Android watches, and I'd be interested to know if it really works without having to send information via an accompanying phone app. Yeah, you will be locked into the version of the operating system that is on the hardware, but that is probably something we have to get used to: hardware and firmware come as a bundle, a package that is difficult to tease apart. Again, with Android watches you could probably also bulk download health data. The question is: do you want to force your participants to have a Google account just to get their health data shared with you? My guess is no; that would be my first reaction to it, anyway. Then I want to present our little homebrew. We used a Samsung watch, and there is text missing here on the slide, that's not nice. To the left you see a watch, or this is supposed to be a watch; to the right, this is supposed to be a laptop. We have a device app on the watch and we have a command line interface for the laptop, and this is how the two communicate with each other: via a local network. The nice thing is that this gets us raw data, so we can actually do data science, which is great, because that's what I'm supposed to do for my research. And the cool thing is that it's possible to extend it to Bluetooth or proximity tracking, to something like heart rate and ecological momentary assessment, because it's basically a smartwatch and people can also interact with it. The negatives: it's still under construction, and it has taken us a long, long time. You need good programmers to come up with a good solution here; it's all programmed in C. The first bulk of the work was done by third year bachelor students at our institute as part of their software engineering course. Then we secured two of the students to work for us as student assistants and continue work, mainly on the command line interface. But for behavioral researchers, behavioral scientists, it's not easy to find programmers who can do that work for them. And the two students working for us are doing it not for the money; we cannot afford too much, we have a standard pay rate for student assistants, and that is much lower than what they would get outside of the university. They are doing it because they like the open source idea behind it. So getting people to support you here is very difficult, certainly for field researchers. And with our solution we are locked into Samsung devices. That's a problem, or maybe not, but that's how it is. Question: AsteroidOS is also an option. From what I understand there are only limited lifestyle apps there. What other disadvantages are there regarding AsteroidOS? This is something that I really want to know from the community, from the people listening here. In Leiden, in the Netherlands, we're working towards the idea of having such an open platform. And I already see that someone has shared... do you hear the rain in the background? I have lots of rain and it's really loud. I hope... Ormol, do you hear me well still? Yes, I'm hearing you well and I don't hear the rain. Okay, good, then I'll just continue working with this. It's really loud right here.
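To make the homebrew setup a bit more tangible, here is a minimal sketch of what the laptop side of such a watch-to-laptop pipeline could look like in Python. The actual device app described above is written in C for Tizen; the port number and the newline-delimited JSON sample format below are assumptions purely for illustration.

```python
import csv
import json
import socket

HOST, PORT = "0.0.0.0", 5005  # hypothetical port on the local network

with socket.create_server((HOST, PORT)) as server, \
        open("samples.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["t", "ax", "ay", "az"])  # raw accelerometer columns
    conn, _addr = server.accept()             # wait for the watch to connect
    with conn, conn.makefile("r") as stream:
        for line in stream:                   # one JSON sample per line
            s = json.loads(line)
            writer.writerow([s["t"], s["ax"], s["ay"], s["az"]])
```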
Okay, so very quickly: the mission of our little group is to be, or to create, an independent community of researchers and other stakeholders, evoking a cultural shift towards more sustainable research. This community works towards a common toolkit, whatever and however that will look, which is transparent, flexible to use and open for improvements and change. Sustainable research means better privacy, adaptable design, affordable and transparent. Privacy allows everyone in the community to take their own decisions and draw their own conclusions about the product; having that control is what's important. And this is a mission that we worked out together with a bunch of people from the wearables-in-practice community in the Netherlands. So, now I've talked for about 45 minutes or so; I hope my story made some sort of sense. And I want to ask you another poll: is a truly open activity tracker an option? Is that something we should look into, or should we just forget the idea? Let's make it a yes/no, and you can interpret 'open' as you want. Yay, people are for it. You know that I'm going to ask how to do that in a while, right? Yeah, so we have like eight people saying, okay, this is absolutely an option. So now the question is: is it worth exploring AsteroidOS plus a custom smartwatch? And now, how do I close the poll and open a new one? Ormol, help, like last time. Yeah, now I'm getting the published polling results; maybe once I publish them, I can open a new one. Yeah. Okay. So, eight people said yes, it's worth exploring AsteroidOS and an open approach. So now I want to know if it's worth exploring AsteroidOS plus a custom smartwatch. It's again a yes or no, and please also use the public chat if you think that maybe another combination is useful. I'm absolutely asking the echo chamber here; everyone is like open, open, but yeah. I like echo chambers, then you hear what you want to hear, right? Oh, anyway, let's go back to the poll. There is someone saying that AsteroidOS plus a custom smartwatch is not a good idea. Why is that, if you want to share it? Or maybe you don't want to share it, but I'd be very interested to hear from whoever said no here. No. Okay, so the person who said no doesn't want to answer. All right. Yes. And then the third question is already being answered in the public chat, I think. I see here OpenHAK as an idea, and GNU Health, which I don't know; but maybe we can open the floor to questions for everyone, because we're only 10 people here. So yes, can I enable their participation? Yes. So now you're enabled: you can talk if you like, or write something in the chat if you like. Yeah, or raise your hand if that is an option in here. Yes, I think it is. So what I would be interested in: Cy revolt, if they are still here, the GNU Health: is that a project in Europe, or how do you know about it? Okay, so you actually know a person who works on it. Okay, perfect, I'll definitely dive into that. Is that a German project? Okay, super. Yeah, thanks a lot for sharing that. Also OpenHAK is being shared here. So yeah, I don't really have a slide anymore, and I'm done.
I didn't even add a slide on how to contact me, but you can send me an email, and my name is on the first slide. There are not so many Daniela Gawehnses, certainly not at Leiden University. Ormol, I don't know how people usually do this with questions? Yeah, normally people can write something or ask something if they like; they can unmute the microphone and ask the question, or write it down in the chat, please. Okay. Yeah. So I just left my email address in the chat. Perfect. Thank you. I hope my story made a little bit of sense. I honestly had problems with how to structure it in a way that behavioral scientists, developers and people doing data science would all understand the entire jazz, because I find it a quite multifaceted problem. But yeah, that was the challenge today, I guess. So if nobody has a question, maybe I have one: which tracker are you using personally? No, I don't use any device, any trackers; it's not that I don't trust them, I just don't wear watches. I never wear one. But would you recommend one of the trackers? For private use I never looked into them. Okay. Another question: I'm wondering whether we have more people in the group who have used or programmed wearables for research. Yes. Are there research software engineers maybe here in the group, listening in? Not too many yet. A software engineer, yeah. Yeah, I find it very difficult, because app development seems to be really a subgroup of developers, and finding them and getting them to also communicate with researchers, I find that very tricky. So yes. Okay, so I have to read out the questions from the chat. What I can say is that the closed state of the hardware doesn't help very much; we would need more lower level access. And an open firmware project, definitely. I might have skipped that: whenever you're deciding for a platform, you're deciding basically for a bundle that combines the hardware, the firmware, the operating system and the software stack on top of it. So you decide for such a sandwich, an entire bundle, that is very hard to take apart or tease apart. And I think that is the main problem: you're not deciding for 'I want sensor A and operating system B', you're really deciding for a bundle. Great, so, yeah, thanks for your feedback. Oh, how much... I don't know what SPS is. Ah, samples per second, okay. So someone is asking how many samples per second you need for the analysis. The data scientist in me says: as much as possible, so go as high as possible. For the accelerometer data and the gyroscope data, we're going at 50 Hertz, I think, so 50 pieces of information per second. For the GPS location information, it depends on how quickly the people are moving; for the children we're probably going at a higher sampling rate than for the residents at the nursing home. Which, by the way, doesn't mean that all nursing home residents are very slow; some of them are very quick. So yeah. Okay, then I have one more comment here: agreed, the closed state is the problem; you need more funding for people power to do the open thing. So, in this echo chamber: are there any ideas for funders? Do you know of any possibilities?
No one knows about funding possibilities for open hardware projects, or open hardware for science projects. I know that there's Mozilla's open hardware program, I think it's called. Any ideas for funding? MNT Reform. And just one question again for Ormol: the public chat is not recorded anywhere, right? So I would have to... that's right, correct, you have to write your own notes. You can copy-paste it if you like. Okay. I think you have the rights for this. Okay, perfect. But you have to do it before I close the session, so keep that in mind. Yeah. Okay. Oh yeah, there's someone taking notes already. Perfect. I think my time is up, so I really don't want to keep people here longer than needed, and I think there are also people already dropping out. Thanks a ton for the feedback and for the information shared in the chat; that was very useful for us, definitely. You can drop me an email, and you can find me on Twitter if you're interested in our research; from there you pretty quickly find all the other people in that working group on open trackers in the Netherlands. And yeah, thanks to FrOSCon for inviting me and for allowing me to give a presentation on a Saturday. Yeah, thank you very, very much for your lovely talk, Daniela. Normally we would applaud now; I'm sorry. Maybe those who like to can turn their microphone on and clap, but nobody has to. So, yeah. Okay. Thank you. Thanks. Bye. Bye.
Activity trackers are used in a wide range of research, from movement science to psychiatry, from criminology to rehabilitation science. In this talk, we will outline the requirements for activity trackers to be used as research devices. Legal and design aspects will be highlighted, as well as the view from data science and field research. We will show how an open source alternative to the currently used devices is a (or is it the only?) way forward. Activity trackers such as Fitbits are widely used amongst researchers to collect physiological and behavioral data in lab settings as well as in natural settings. In this talk, we will focus on data collection in natural settings and first outline the requirements a research device has to fulfill. More specifically, we will highlight the data privacy and protection aspects, as well as aspects of data collection that allow not only social scientists to use the data, but also data scientists to learn from the collected data. In the second half of the talk, we will first present the platform we developed to collect data with a Samsung consumer grade fitness tracker, and then suggest an open source and open hardware solution to overcome the obstacles that researchers currently face. The first design decisions to create a community-led platform are presented and the design process highlighted. In the last part of the talk, the audience is invited to contribute feedback in a structured manner, to make the next steps towards a modular and open fitness tracker for research purposes. The work on the Samsung platform is done in cooperation with Peter Bosch, Lieuwe Rooijakkers and Frederick van der Meulen (all CS students at Leiden University). The work on the process design has been done together with Assia Kraan (Hogeschool Amsterdam), Ricarda Proppert (Leiden University) and Klodiana-Daphne Tona (Leiden University and Medical Center).
10.5446/51737 (DOI)
Hello and welcome to my talk on remote first, on avoiding video conferencing overload during times of a pandemic. My name is Isabel Drost-Fromm. I work as an open source strategist at Europace AG. I'm also a member of the Apache Software Foundation. Now, a lot of what you will hear in this presentation is based on my experience at the Apache Software Foundation, and based on patterns and best practices that we've rolled out over at Europace. I'm also co-founder of Berlin Buzzwords, which is a conference on all things search, scale, streaming and data analysis. Typically, what I would do now is invite you to travel to Berlin in June next year and to use the conference as an excuse to make your employer pay for your trip. Currently, though, traveling around the world is not a particularly good idea. That's why Berlin Buzzwords this year moved to an all remote conference. The organizing company behind it, called Plain Schwarz and located in Berlin, did a very good job of moving all of the engaging and all of the socializing stuff into the virtual and digital world. So if you need help with that, talk to them. I'm also co-founder of FOSS Backstage, which is a conference on open source, about all of the things that happen behind the scenes. That's going to happen again, digitally, next year as well. Now, a lot of the ideas in this presentation are based on a talk I saw in Amsterdam at ApacheCon back in 2008. It was a talk on asynchronous decision making by Bertrand Delacretaz. If you are interested in that topic, I would like to invite you to head over to YouTube after my presentation and watch the recording of his FOSDEM 2018 talk, because there you will learn many more patterns for how to make decisions and how to avoid meetings. Typically, what I would do now is take my microphone and ask you a couple of questions, or do a tiny little raise-your-hands exercise. This is remote, so that's not quite as easy anymore, and as this is a recording, it's particularly tricky. That's why, ahead of this talk, I created an Etherpad, which we will use throughout this presentation in order to get you engaged. Don't worry if you didn't type the URL just now; it will be displayed on each slide that has a question. So let's get interactive: remote working in five minutes. Take yourself a couple of minutes and think about the first sentence that comes to mind when you hear 'remote first', either in a corporate setting or in an open source project setting, for instance, and either in regular times or now, during a pandemic. Head over to the Etherpad and write down what your first bullet points, your first sentence, would look like. Ready, steady, go. Okay. Ahead of this talk, I also interviewed a couple of my friends and colleagues, and people that I've been communicating with at Berlin Buzzwords, on what their experience of moving to a remote first or remote early work environment looked like. Let's have a look. One piece of feedback that I got was: I lost all flexibility. We've moved all pairing to video conference sessions, and we pair all the time. So I'm still on the same schedule that I would be on in the office, but there's no flexibility in terms of what we do or where we have a coffee. Another one was: it's exhausting. I mean, back-to-back conference calls all the time. So instead of gaining more time, this person realized that even the time you typically spend switching meeting rooms gets filled with additional conference call slots.
Another told me: it's worse than on-site meetings. What could have been an email still became a video conference. So instead of having accidental communication or near-desk communication in the office, we now schedule a video conference; you have to spend a couple more minutes getting it all up and running just to exchange a few pieces of information or ask a quick question. Another one told me: it's chaotic. I make time for the conference call, but it doesn't have an agenda, it doesn't have any moderation. But there are also positive signs. There are people telling me: I manage to get more done, there are fewer distractions. There's another one who told me: colleagues moved to another country; remote really works for us, really well. So what's the difference between those two groups of people? Why does it work on the one side and not on the other? During this talk, you will see a few patterns that you can use to make it work more smoothly. But in order to be able to transform your in-office setting, we will first look into which types of communication happen in an office. In an office, you have something like a coffee machine. People go there, make their coffee, chat a little bit. This seems random, it seems unimportant, except that random people meet there, and they talk not only about small talk stuff, but also about work-related issues. So you have an organic information flow within your company. You are probably pairing in order to implement new features or fix bugs: two people sitting in front of one computer. There's a lot of teaching happening, a lot of information exchange, and fairly intense, focused sessions. There's team communication happening at the desk. In an agile setting, you want the entire team to sit at one desk, because this speeds up communication, and clearly, what you observe is that it's very fast to ask questions this way. Everyone sitting around can hear those questions, they can answer as well, and they also hear the answers, so information flows fairly quickly. You also have something like formal team communication: say you have daily stand-ups, sprint reviews, sprint planning, sprint retrospectives, et cetera. They are well prepared and fairly formal, but they still get people together. You typically have cross-team meetings where you do some kind of planning or make some kind of architectural decisions. Likely you also have something like company all-hands meetings, where a few people talk and many people listen. And of course you've got something like lunch breaks, where, either in a one-on-one setting or as a small group, you go out and have a meal together. Maybe you talk about non-work-related things, increasing bonding between team members, or even about work-related things, having that informal information flow again. Now, if you think about simply switching to video conferencing technology, what's easy to replicate is something like formal team communication. It's something like cross-team meetings, where you have a video conference and you exchange ideas; likely you have a digital whiteboard where you can move around post-it notes. It's also easy to set up something like company all-hands meetings, where you have few speakers and many people listening in. However, it's the informal, accidental communication that's much harder to transform. So what kind of communication happens? We will head back to the Etherpad. I've prepared examples of the kinds of answers I'm after here.
Take yourself five minutes and think about which purposes of communication you can think of. Why are you communicating? Why are you trying to talk to your colleague? And the other question: which properties of communication can you think of? Is it formal, is it informal, archived, et cetera: which properties of communication settings can you think of? Ready, steady, go! Okay, a couple more seconds. So, likely you found similar purposes and similar properties. You communicate in order to share information, but you also do so in order to socialize; think of the coffee machine examples that we had before. You are giving feedback and receiving feedback. You are motivating people. You are trying to resolve conflicts. And you are making decisions, and teaching. In terms of properties, it's something like formal versus informal, as I said already. There's a difference between archived and deniable, a difference between transparent and private, and between high bandwidth and low bandwidth. With bandwidth, I don't mean technological bandwidth; what I mean is how likely it is that there are misunderstandings. If you meet in person, face to face, you will hear my voice, you will see what I do with my hands, and it will be fairly easy for you to identify whether there's something ironic or sarcastic in what I'm saying. You will see me smile, or you will see me sad. If you only read the text, all of this is left up to your interpretation, so it's much harder to understand something correctly. So our goals for remote first should be to gain flexibility, both in terms of location and in terms of time. If we do remote first, why still stick to meeting times that require everyone to work according to the same schedule and to be in the same time zone? You can be much more flexible. However, what you want to achieve is to transform office communication to digital alternatives, and not only the formal stuff that everyone is aware of, but also the informal stuff. What you want as well is to make things as transparent as possible, in order to increase innovation speed through transparency, much like we've seen during the pandemic, where science moved to an open publication model. That meant publications came out much more rapidly, and scientists could take the experience and learnings of others and base their own research on them, which meant progress was way faster than before. What that also means is that your organization has to become much more tolerant towards people making mistakes, because mistakes apparently belong to learning. You want a setting where people are not afraid to share work in progress, where people welcome work in progress being shared, and where people are open to spending extra effort on mentoring and explaining. As a first step, let's focus on in-project communication: just one team, one project. If you look into the office, what happens is what one could call 'mass media'. It's something like: I found a bug in module X; go to person Y, go to Bob, he knows best how to fix it. I found an issue with module Y; go to Alice, she can help you. If you move to a remote setting, and if you take a bit of inspiration from open source projects, that's very different. Why is that? This one-on-one communication scales fairly well up to a team that's roughly pizza-sized. Above that, the connections just grow exponentially, and it doesn't scale anymore.
So what we want instead is a central hub. Within Apache, this is a mailing list, but it could be pretty much anything; it could be a Git project as well. The funny thing: if you have a mailing list and you have a question, one person goes there and asks this question. Many people see the question, and everyone can participate in answering and adding their perspective to it. So you benefit not only from the wisdom of one person, but from the wisdom of the entire team. Plus, you have many more people who potentially have the same question, or who didn't even think of that question, who suddenly see it and see the answer. So they are learning in a drive-by way. What happens as well is that the people who used to ask that question at some point become active themselves, taking support load off of the core developers by answering those easy questions too. There's a funny thing about the mailing lists at Apache: if you hear 'mail', it sounds awful. Mailing lists there are regular mailing lists that you subscribe to, but there is an archive, the archive is searchable, and each message within the archive can be accessed through a link. So you can find previous answers. Does that mean users will go and search the archives? Often not so much. Is it still helpful? Yes, of course, because suddenly you don't have to retype the entire answer. You go to the archive, you look it up, which is much faster, and you present the link to that answer. If you do that often enough, you can take that link and put it into a more structured documentation format. So what you end up with is one central hub and many people communicating through that hub. People no longer go to specific other people; they go to the central hub, if it's project-related communication. Clearly, you don't have to have only one project. You can have Project Unicorn with its central hub, and you can have Project Kitten, again with its own central hub. Now, something that's slightly more tricky if you don't have this pattern: if there's a dependency between Project Kitten and Project Unicorn, one of the developers can simply go to the central hub of Project Unicorn, ask their questions, and get their issues resolved. So you don't need the entire escalation path through management in order to talk to someone else. What can happen as well is that this person from Project Kitten stays active within Project Unicorn and learns how they are doing some things: how do they do CI/CD, which kinds of dependencies do they have, which kinds of libraries do they use, and is that something we should do at Project Kitten as well? So suddenly there is an informal flow of information between those two projects. The same can happen the other way around. What happens also is that people can become active not only in answering questions, but also like this: we've got an issue, we are using Project Kitten in our dependencies, but we need a change to be made. We can make such a change ourselves, potentially with a bit of mentoring and help from Project Kitten, and get it resolved without going through the entire prioritization cycle. So what you can do there very easily is share information, because it's fairly neutral. What you do as well is give feedback: you send a patch in, you get a review back, and if this review is good, it will be motivating for the person who submitted it. It works sort of, kind of, well for resolving conflicts, if you know how to do moderation in writing.
Especially in the review case and in the question-and-answer case, it also works kind of, sort of well for teaching. If you know how to do it, you can also make decisions this way in an asynchronous fashion, avoiding meetings for tiny decisions, and if you scale it up, as we will see later in this talk, also in order to make larger decisions. Communication there is archived, which is very important. It's transparent for everyone who wants to participate, so it's not only transparent for one team. But it's low bandwidth: it will be in writing. What happens as a side effect is that you generate passive documentation. Passive documentation here means you capture everything that was said, you can reference it later on, and you don't have to retype answers. And you can take links into that archive, to particularly influential emails, and put them into your structured documentation. So essentially, what you have with this passive documentation is not the final docs, but a very good baseline. So let's go one step further. Let's add a few communication channels. If you have a very bad conflict, if you want bonding between people, there's nothing better than meeting in person. It's very high bandwidth; humans are great at reading faces. That's why we open source people also like conferences, like FrOSCon. It's not only about the hard content that's being talked about in the presentations; it's also the hallway track, the social events, having a meal together, simply hanging out together. However, it's expensive to set up, because it's synchronous in both time and space. You have to get people together in one location on Earth at the same time. So even before the pandemic this was already expensive to set up, and it's worse now. Plus, it's not particularly durable, because this kind of event has to be repeated for every new human joining the project. It's very good for motivating people. It's very good for socializing; everyone knows the beer event after the FrOSCon Saturday. It's very nice for resolving conflicts as well. I know several people within open source projects who had very intense discussions on mailing lists, who then during a conference met over their favorite beverage, and suddenly it was easy and simple to resolve differences. So never underestimate meeting someone in person. It's also fairly informal, but it's very high bandwidth. Because of it being particularly expensive, I wouldn't do an in-person event only for sharing information. What else can you do? You can reduce the bandwidth a little bit and do a video chat, like we do now. You mostly see faces, you see less of the body language, but you still hear the tone of voice. You see if I'm smiling. But it's still fairly expensive to set up, because it's synchronous in time. Everyone needs to be awake at the same point in time. We've seen that at Berlin Buzzwords, which is a conference whose audience typically spans the globe. We've also seen it at the InnerSource Commons Summit, where we have people from Asian time zones, from European time zones, from North American time zones. Someone always has to be awake at midnight. Plus, it needs good technology: a good uplink, camera and microphone. And it's barely durable. Sure, you can hit the record button, but imagine having to watch all of the video chats when you're joining a new project, or a new company. So that's fairly infeasible. Okay, what's an alternative? Reduce the bandwidth even further. Go to online group chats. That used to be IRC.
Nowadays, cool kids use Slack. The difference is not that large. Yes, you do have a history, you can search it, plus you do have a little bit of threading. But it's text only, except for a few cues like "someone is typing". It's rather cheap to set up, but it's still kind of, sort of synchronous in time. A conversation in Slack that starts at 8am in the morning, only continues at 8pm in the evening, then continues on another day, and yet another day, doesn't feel very good. It's pretty durable: you can search it, you can skim the logs, but the logs are very, very unstructured. There's not one topic and a few messages that belong to that topic; typically they are interleaved. You can use a web forum, low bandwidth again, but suddenly it's easier to use that in an asynchronous way. You have messages grouped by topic, and you can follow the conversation based on that topic. You can search it, and you can follow archived discussions. The same is true for mailing lists, if you have a decent client plus an archive which has search functionality. If you need a bit more structure, go to your issue tracker; you will find that there. However, your issue tracker is still fairly fine-grained. It will be very hard to deduce the architecture of your project just by looking at the issues in your issue tracker. You will need something more in order to deduce that. Wiki pages can serve that purpose as a first step: they are not as well structured, but they make it easy to give high-level views. If you want something really well structured, really well thought through, create a web page where you collect information not only for first-time users, not only about issues that you know about in your software, but also on how to get involved, where to find the source code, where to find the continuous integration system, where people meet in order to communicate (be it a mailing list, be it a Slack group, whatever), and include stuff like style guides, how to run the tests, which kinds of options there are in your testing system, et cetera. So essentially, the lesson learned here is that you want one canonical place for keeping current status. You don't want meetings only to give status; you want to provide status as a self-service, so that someone watching your team from the outside knows what's going on. You want one canonical place for keeping documentation, in order to avoid repeating yourself, and you want one canonical place to track previous decisions. You want to provide your project with long-term memory. Why did we implement feature X? Why did we choose architecture path A over B when implementing this? So let's take this a step further. How about scaling decision-making through transparency? So far, what we've looked at is decision-making and communication within one single team, and across team boundaries, especially where two teams are dependent on each other because they work on components of the system that are related. What we want to do now is make decisions at a higher level. Let's look at an example: how do you make a meeting with dozens of agenda items take an hour or less? That's something that I learned at Apache Software Foundation board meetings, and it's something that you can take over to your organizations as well. So, first step. Such a meeting typically has a mixture of: I want to share information, I want a decision on a tiny issue, I want a decision on a larger issue, I want to work out a solution.
Make all of those agenda items available for reading well ahead of time, at least two to three days before the meeting happens. And by agenda items, I don't mean just the bullet points, but really the entire stuff that you would share during that meeting. And if there are multiple people who want to share stuff or discuss stuff, put all of that into the agenda. What happens is that suddenly the agenda looks pretty much like the meeting protocol after the meeting happened, before the meeting actually happens. With that protocol, what people can do is read through it. Everyone can do that on their own time. They can do that while drinking a tea. They can do that while sitting on the balcony. They can do that in the evening. They aren't confined to one time slot. Now enable pre-approvals: someone who agrees with some agenda item, give them a chance to mark this agenda item as "yes, I agree, we don't have to talk about it". If everyone at the meeting agrees, or if enough people agree, depending on your decision-making process, move that agenda item to done. Don't even look at it during the meeting. What we observed at the Apache Software Foundation is that suddenly an agenda that looks like this shrinks to only those items that really require discussion. Everything else people can read, decide, or discuss ahead of the meeting on their own schedule. This also means that you have to enable asynchronous communication in order to clear questions. Not everything is clear and without questions the first time you formulate it; if you give people a chance to discuss these things on an asynchronous channel, the chance that items will move to the done category is greatly increased. Now, does that take rocket science? Does that take awesome tooling to get done? Not really. What we use at Apache is that this protocol, this agenda, is shared in a text file which is checked into version control. So everyone who participates in the meeting can, days ahead of time, go through it, read it, and make their notes. Now, admittedly, our agenda is larger than just a couple dozen entries; it's more like 70, 80, 90. So the text file now is the data storage backend for a web frontend, which is remote-friendly because it enables offline editing of that file and presents it in a better format. Essentially, all you need really is a text file that you put in version control; a small sketch of what the pre-approval step could look like follows below.
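To make that concrete, here is a minimal sketch of a pre-approval filter over such an agenda file. To be clear, this is not the actual Apache tooling: the file name, the item format, and the quorum value are all hypothetical, chosen only to illustrate the workflow described above.

```python
# Hypothetical agenda format: items separated by blank lines, first line is
# the title, and an optional "preapproved:" line lists who already agreed.

QUORUM = 5  # hypothetical: number of pre-approvals needed to skip discussion

def split_items(text):
    """Split the agenda file into blocks separated by blank lines."""
    return [block.strip() for block in text.split("\n\n") if block.strip()]

def is_preapproved(item):
    """An item is done before the meeting if enough people marked approval."""
    for line in item.splitlines():
        if line.startswith("preapproved:"):
            names = [n.strip() for n in line.split(":", 1)[1].split(",") if n.strip()]
            return len(names) >= QUORUM
    return False

def remaining_agenda(text):
    """Return only the items that still need discussion during the meeting."""
    return [item for item in split_items(text) if not is_preapproved(item)]

if __name__ == "__main__":
    with open("agenda.txt") as f:  # the text file kept in version control
        todo = remaining_agenda(f.read())
    print(f"{len(todo)} items left for discussion:")
    for item in todo:
        print("-", item.splitlines()[0])
```

The point is not the script itself but the effect: every item that collects enough approvals ahead of time never consumes a minute of the meeting.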
For communication, what does the foundation use? Just a teleconference. Could be telephone, could be video conference. Nowadays it's a combination of video conference and, for those people not willing to share their video, a dial-in number as a fallback. So if everything breaks down, you have the dial-in. And for the meeting itself, to make it run more smoothly, you have a back channel on IRC or Slack, where people can communicate about little nitty-gritty details, so they don't have to make it onto the agenda. And it's not only nitty-gritty details; it's also a lot of jokes that make it into the back channel. Okay, taking that to the next level. So far we were talking only about projects. But what about open and transparent decision-making across the whole organization? There's a book by Jim Whitehurst called The Open Organization, which has a very nice concept: in order to drive engagement and collaboration into the roots of an organization, you need to get people involved in the decision-making process. Okay, now your entire organization is a bit larger than your team of six to seven people. How do you get all of these people involved? You can't put them in the same room and get them to discuss things, even if everyone is on site. Now, the thinking behind that is that making an executive decision by fiat is very fast, but that's where the real hard work starts, because now change management starts. The idea is to get people involved in the decision-making process, which is slower, but then you hit the ground running once the decision has been made. You don't have to convince people about it anymore, because they already own what they decided. Essentially, what that means is very familiar for people coming from the open source world. It means opening up not only the software, not only taking the package, putting an open source license on it, and publishing that. It means opening up the entire creation process. What we have seen happen in Germany was the coronavirus warning app. Not only was the software published, the entire creation process was visible. That means that the community could come along, file bug reports based on issues that they found with it, and have those fixed before it was released. It also meant that trust in and acceptance of the app went up accordingly. Plus, it meant you could benefit from the wisdom of the entire crowd during the creation, not only after the fact. Now, engaging with those people takes a ton of time, right? It also means that suddenly you don't only have to explain what you want, but also why you want to do it. Because only if people understand why you want to make a certain change can they make sensible suggestions about it. That means it will be slower, and if it feels overwhelming, likely someone is moving too fast. However, it's worth it, because slower decisions lead to better results. You can get your entire organization up and running with this decision-making, receive more buy-in, and make better decisions based on experience from various parts of your organization. A lot of that, again, is based on asynchronous decision-making, because you can't get all of these people into one room. One example is when we came up with our inner-source principles at Europace a couple of years ago. What I did was create a draft version of those principles, share them with the entire company, and ask people to provide feedback and improvements. When those principles finally were published, that was a no-brainer. Everyone was aware of them already, and all of the glitches that otherwise would have resulted in pushback had already been ironed out. So what you do for that, again, is enable asynchronous communication in order to prepare consensus. You don't get everyone into the same meeting room. You share your idea, and you make people participate. You leverage your soapbox. You do not only share solutions, but you also point towards issues and motivate people to think about those issues and solve some of them. You share your plans not when they are already made; you share them early, even if they are half-baked. Again, that requires a lot of trust in your organization, and it also means that people are aware that mistakes are being made, and that mistakes are part of the learning process. What that means at the organization level is that you suddenly have something that you know from software projects already: you release early, you release often, you make small reversible steps, and that way you avoid making big mistakes.
So suddenly making mistakes becomes something small and something cheap, not a catastrophe anymore. You also inspire other volunteers to become active, because you share not a polished version, but something where you ask volunteers to provide their input. Now, a lot of these patterns we know already from the open source world. We've started collecting them at the InnerSource Commons, which is a group of people with an open source background who want to bring this kind of collaboration into the enterprise and make companies adopt open source collaboration principles. We are writing those down in the form of patterns. You probably know them from software engineering patterns; other people know them from architecture patterns. So we do not only write down "do X"; we write down: if you observe a certain situation under certain conditions, you can apply this kind of change, and it will lead to a situation that is different in such and such a way. Typically, people go there for the patterns and for the learning paths, but they stay for the community, even if they are changing organizations. Now, there's a tiny little catch there, of course. InnerSource Commons is just one step on the journey. The goal is to train more humans in open source practices, because that means lowering the barrier to getting involved upstream, and it means, hopefully, that those companies who are standing on the shoulders of giants in order to solve their business cases figure out how to help those giants: how to fix the issues that they have with the open source projects they depend upon, and how to become active themselves, not only by submitting bug reports, but by submitting patches as well. And with that, I would like to thank you for your attention, and I wish you a lot of fun at the remaining FrOSCon. Thank you!
A virus caused a lot of teams to move to a remote-first setting in a matter of a few days. Too often the result was teams emulating office settings with remote video conferencing. In this talk we will look closer at how Open Source projects succeed in handling highly distributed teams. We will look at patterns that can be applied inside of companies to make remote work more efficient. As a side effect we will see how adopting Open Source collaboration patterns on the inside will lower the barrier to contributing to regular, public Open Source projects.
10.5446/51742 (DOI)
Welcome to my talk on how to organize work in Corona times and what we can learn from open source communities. I hope you can all hear me fine; otherwise please just let me know in the chat or contact my moderator, Laiko. So let me get this started with a brief personal story. For the last 17 years, I have been in and out of multiple free and open source communities. I enjoyed collaborating with KDE and openSUSE, through OpenUsability with projects like OpenOffice and GIMP, and later with MySQL, PostgreSQL, Plone and Drupal. And now I am at eyeo, the company behind Adblock Plus. I was always fascinated by the fact that free and open source collaboration works across language barriers and borders, highly distributed, fully remote. After a career in software development, organizing work in an efficient and effective way has now become my major focus as eyeo's chief operating officer. And I can make best use of what I learned in the past from open source communities, as not only are we developing free and open source software at eyeo, we are also doing this with remote teams dispersed around the whole globe. In my talk, I want to share our experiences on how to organize work in an open source project becoming a for-profit company, and how this helped us to stay productive and healthy in a global pandemic. I will walk you through the topics of remote collaboration in open source communities and remote companies, what is similar and what is different. Then I want to share some examples from our hybrid approach at eyeo pre-Corona. And then, as the last part of my talk, how we organized our remote work in the global pandemic that we are currently in. So let's get started with a look at how open source communities organize their work. You know that when software gets developed as an open source project driven by its own community of contributors, they need to make sure to involve any contributor irrespective of their location, time zone or working hours. To make this happen, the community needs to ensure that communication can happen in an asynchronous, written fashion, and that knowledge gets distributed in a written way as well. And everybody is aware that people work together in their spare time as volunteers, so no availability at specific times is expected. Communities share proper tooling for remote collaboration, like distributed version control, code review and build tools, chat, and also bug tracking tools. But even if the day-to-day focus is on remote interactions, vibrant open source communities know that they need to get people together regularly to build trust and a sense of community. And this is why they meet yearly at conferences, and also offer local user groups. Now let's move over to how work is usually organized in companies. Let's start with the teams. When we're looking at varieties of remote work in companies, the first question is: are the teams distributed or are they dispersed? Distributed means that each team sits together, on site. They are co-located on team level, but the other teams of the company may be located in another city or another country. Dispersed, on the other hand, means that the team itself is scattered all around the globe. Some might live in the same place or even work from the same office. But in a meeting, for example, everybody will dial in from a different location. They don't share a physical team space. Then, on the individual level, people can live remote work in different ways. Some might only work remotely. Some may switch between working from home and the office.
And besides these two, there might even be more locations to rotate between, like cafes, co-working spaces, or even parks. As a remote worker, you might even want to take this flexibility one step further and become a digital nomad. And then on the company level, again, a multitude of varieties regarding the importance of remote work in your organization, depending on whether the needs of your distributed teams and remote workers come first, or whether your organization optimizes the workplace experience for the needs of the co-located on-site staff. Whichever pattern your organization chooses, each comes with its own challenges and benefits. So let's look a bit more into these. Remote work enables companies to support their growth by hiring the best talent on the worldwide market. It provides for an international, diverse workforce. And it provides staff with the means to work from everywhere, and also to keep a healthy family life. It supports disabled and sick people who want to work but can't commute. And the lack of commuting, as well as of occupied office space, is also pretty eco-friendly. On the other hand, remoteness can mean that individuals become remote to their teams. Building trust and psychological safety, which are key for a high-performing team, is way harder than when you share the same physical space. Little overlap between time zones and crappy internet connections turn conference calls into torture. And all of this may keep people from regularly interacting with their teams. Still, keeping these challenges in mind, there can be several reasons why a company wants their workforce distributed or dispersed. They want to tap into a global talent pool. Possibly they are looking for lower salaries that they can achieve by outsourcing. They might have passed through a merger or acquisition and now have a whole team sitting in another location. It may also be a way to have the staff closer to the customers, if those are distributed internationally. Also, it supports diverse teams, which is pretty important, because diverse teams tend to be more creative and more performant in the end. So if you have different cultural backgrounds in an international team, this might lead to that result. And overall, offering remote work and home office can lead to a way happier workforce, because it provides for more flexibility and the means to achieve a healthy work-life balance. So if you want to move in that direction with your company and ensure remote work is working for you: to get remote work right in a company, a well-defined remote strategy is key. And like with building any kind of strategy, it all starts with the company's business goals. Then you need to have a good understanding of which culture you need to achieve those goals. And then: can you nurture this culture, can you achieve these goals more easily in an on-site or in a remote setup? After having answered that question, a sound remote strategy ensures that the people you employ, the processes you define, and the technologies you apply all support you in achieving these goals. And with that, a remote strategy will encompass, for example, a hiring strategy (how to hire for best remote fitness) and an onboarding strategy. It will need to care for proper remote education and training. There will be communication policies set up to ensure that communication, which is so important in a remote setup, happens in a proper fashion.
A remote strategy also needs to care for team building, which is way harder in a remote setup but even more needed. Tooling is very important as part of the remote strategy: which are the best tools to support your teams in communication and collaboration? And you also need to keep in mind the aspect of privacy and security in that case. And then there is remote facilitation. How do you want remote meetings to happen? Do you want remote meetings to happen at all, and in which fashion? And are people able to conduct them in that way? And last but not least: which kind of workplace experience do you want to offer, both for on-site people but definitely also for your remote people? So before going more into detail here on a remote strategy by providing you a concrete example, let me first quickly circle back to what we can learn from open source communities while building our company remote strategy. Looking for example at hiring: open source communities in general welcome people from all around the globe. For onboarding, they provide written documentation and point contributors to it, rather than answering the same questions over and over. They educate about how to properly contribute, for example through heavy use of code review, and also by adding code sprints as co-located events. They ensure decent and constructive communication by providing and enforcing, for example, a netiquette or a code of conduct. They build trust in the community by regularly meeting in person at conferences or in local user groups. And they provide ample tooling for remote collaboration like, as I said, distributed version control systems, mailing lists, forums, wikis and so on. And in this scenario, remote facilitation of meetings is not so much a topic they need to address, as they rely mainly on asynchronous written communication. So to make all of these topics around remote strategy more tangible, let me now share some of our experiences from remote work at eyeo, first pre-Corona. I don't see you, but I can imagine the question marks in your eyes. So please, a quick virtual raise of hands: who of you is using an ad blocker in their browser? If you raised your hand, you most probably know our most popular product. Adblock Plus is not only the most used ad blocker worldwide, it is also the most popular browser extension. And for sure it is free software. But our mission at eyeo goes far beyond ad blocking. By building, monetizing and distributing ad blocking technology, we create a sustainable ecosystem disrupting the biggest market on the internet, which is online advertising. But I wanted to speak about our remote culture. And for that, you first need to understand our special situation. We not only have distributed teams; every single team is dispersed worldwide. Additionally, you find us in three office locations and in 26 remote locations, which makes all of that a quite balanced hybrid model. And even the people who are close to our offices, nearly all of them enjoy home office at least one day per week. Now let's look at remote work at eyeo and what came with what I call our open source heritage. eyeo builds on the success of an open source ad blocker that was created by software developer Wladimir Palant in 2006. When Wladimir founded eyeo in 2011 with Tim Schumacher, this open source backdrop defined the remote work culture in the company for many years.
As explained in the beginning, developing software as an open source project requires the ability to involve any contributor irrespective of their location, time zone or working hours. And this meant that we needed to provide support for public, asynchronous, written documentation and communication: we have a public IRC channel, we provide our code and issue tracking on GitLab and GitHub, and our documentation you can find on the adblockplus.org website. We also support public work tracking, as mentioned, in GitLab. And we supported very individual preferences regarding working hours and availability, treating our employees in those days more like contributors to an open source project. And this perspective was the one defining most of eyeo's remote culture in the past. While establishing eyeo as a business, we learned that some of these cultural traits do not work too well for a for-profit company. So there are a couple of conflict areas; let me share some examples related to remote work. For example, tooling needing to be publicly accessible means that we have to make sure that topics which underlie business confidentiality can still remain confidential, and that we are limited in the tools we can use. So this already provided a bit of friction. Then, this asynchronous working style where people choose when to work led to long waiting times between different process steps. It was very hard to implement standard agile practices, as they often favor co-located teams or synchronous time together. So we found the lack of real-time techniques, like, for example, the daily stand-up or pair programming, quite hurtful. And what also came out of that situation was a lack of trust and a lack of team coherence, due to a general lack of real-time face-to-face interactions, and also a lack of ability to give direct face-to-face feedback. Anyway, we wanted to stick to providing remote work, because it definitely helps us attract and retain talent and ensure diversity. Our remote strategy nowadays is built on four building blocks: ensuring remote fitness, providing the best tooling, creating an amazing workplace experience independent of whether you are remote or on site, and ensuring proper trust building both in the teams as well as across the company. Let me quickly show you a couple of examples for these. Ensuring remote fitness is already a topic in hiring, where we try to find candidates who understand what it means to work in a remote setting and are able to collaborate and communicate in a fitting fashion. And it also requires ongoing training and training resources. What you can see on this slide, for example, is one of the ways we educate each other on how to properly set yourself up in front of a webcam and handle the whole situation when you're having a video call. Tooling definitely affects our software and hardware for video calls, what we offer for chats, how we track our work and document our results, and also how we can visualize our thinking processes. What you see here on the slide, for example, is the COCO, one of the tools that teams use to create something like a virtual office space. Then, trust building is a major topic both in the teams and across the company. For the teams, we ensure that our agile coaches support regular team building and team cohesion, that they are able to provide appropriate remote facilitation in the meetings, and that they ensure that the teamwork is not impeded by the dispersed setup.
One of the means that we use, for example, and which you see on the slide here, is making sure our retrospectives work also in a dispersed setup. We make heavy use of tools like Google Drawings and different formats for running remote retrospectives, so that people can use these tools for continuous improvement even if they don't share the same physical space. Still, we understand how important it is to get together once in a while. This is why in the past, before Corona, we made sure that each team meets three times a year in person to work together and also enjoy fun and team building activities. Additionally, once a year the whole company gets together for an offsite, where we travel together to some nice place to have workshops, give presentations, but also just enjoy the time together. Still, this year we additionally started to experiment with cross-company remote events, to make sure that we are completely inclusive and have something in addition to these offsites which works in our remote setup. We started with a remote open space, which worked amazingly well. Then the fourth pillar of our remote strategy is the overall workplace experience. This starts with simple things like providing every employee with a high-quality headset; that can make a big difference. Also, employees working remotely all of the time get an allowance for an ergonomic desk and a good chair. Now, as the last part of my talk today, let's look at what changed at eyeo when Corona hit us. In a nutshell, we were well set up for remote work, as you could see from what I just shared, but definitely not at that scale. We had to learn and to adapt a lot. Let me briefly share how it all started for us. Already end of September... sorry, end of February, one of our colleagues had an encounter with an infected person, and she notified management of that. When we learned it, we immediately closed down our Cologne office, where she had been the week before and had had plenty of encounters with the people working there. By sending everybody home in quarantine, we made sure that the virus, if it had already started to spread, couldn't spread anymore. In the end, this colleague was not infected, but it was a safety precaution we took. Then, you know that moving forward into March, there was a general lockdown in Germany. We were not forced to keep our offices closed, but we did so nevertheless. We closed the offices in Berlin and in Cologne, and also the one in Malmö in Sweden, and we issued a general travel ban. This was because it does not hurt our operations too much if people work from home. As mentioned before, we had the setup already, and we were able to keep our productivity up even with our offices closed. We took the safety measures we were able to take, to make sure that really each and everybody in the company keeps safe and sound at home. Then the virus spread not only in Germany, but also across all of the countries our colleagues are working from. We saw not only lockdowns, but also real curfews. Then the schools and kindergartens closed. Overall at eyeo, 12 people had to quarantine themselves, but in the end, luckily, only three turned out positive, and they are well again. I think we have been quite lucky overall, which is also due to reacting so quickly in the beginning. For sure, we had to cancel all our planned in-person company and team events, which hit us quite heavily, because we had a great offsite planned in a beautiful castle in Germany.
As said, for our crisis management the overall goal was that we wanted to keep everybody safe and that we wanted to keep productivity up. To make this happen, we went through a couple of phases that I want to explain now. We first set up a Corona task force from several areas in the company, like office management, operations, workplace experience, people ops, security and privacy, and a couple more. We then set up proper communication and information channels. Next, we made sure that everybody was set up for working in that new normal. We also created emergency plans for the case that someone in upper management would get sick from Corona, so that we were sure what to do and where all of the resources were to be found. Then, with all of this in place, we just iterated, learned and improved. Let's now dive into some of these steps and phases, and let me share some of our best practices and learnings. I think the most important thing at the very beginning was to keep everybody in the company informed, involved and engaged. The whole thing started with me sending out an email explaining the situation with this one colleague in quarantine and that we had closed down all of our offices. A couple of mailings followed, but we also made sure that we had information on our intranet, like weekly updates. We created what you see here on that slide, this beautiful Corona news hub, where people could find any information around the whole topic, including an FAQ that addressed each and every question that came up during the weeks. We iterated on that in our all-hands presentations that happen each and every week, when the whole company gets together in a joint video call. I think the most important thing to keep everything together was our coronavirus chat channel, where people could vent their worries, share information, upload millions of memes, and just try to keep close together in those hard times, specifically around March and April. Next, we needed to make sure that everybody was able to adapt to and work under these new circumstances. We set up a whole list of measures around that. We started by offering two additional leave days to each and everybody. We called them Corona Community Days, because we wanted people to use them either for setting themselves up in this new situation, like shopping for groceries or getting the home office properly set up, but also to care for parents or neighbors, shop for them, make sure that the homeschooling for the kids was properly set up, and these things. Two additional days for caring. Then, we had closed the offices, but they were not contaminated or anything like that. People could get into the offices and take their equipment home, like their chairs, or their monitor, or anything they needed, so that we could make sure that the people who normally worked mostly from the office had good equipment at home as well. Still, if this was not sufficient, we offered a 100 euro extra home office allowance for each and everybody to, for example, buy a better desk, because people couldn't move our desks from the office. We also looked into making sure that all the tools we normally use for our remote collaboration and communication were able to properly scale. I think the most important topic here was that we set up Corona Family Care, because as soon as the schools and the kindergartens closed, it became clear immediately that people were not able to work like they were before.
We asked our staff to provide us with 100% productivity and focus when they were able to work, but to make sure that they put the same focus on their kids when they needed to care for them. This meant that people could, and still can, register for Corona Family Care, which means that they just enter in our HR tool that they need to care for their kids on that day and at which times, they share this information with their teams, and then they're good to go. This means that they can work less time without any pay cut. I think this was the major relief to all of the families that were affected by Kitas (daycare centers) and schools being closed. Next, as said, we ramped up our remote tooling. We were set up quite well before Corona hit, but we now needed to get way better and ensure scalability and stability of all our IT operations. We made sure that our video call infrastructure was able to support more connections in parallel, and we set up a 24/7 help desk. We switched chat tools so that we could use a more feature-rich one, simply a better one than we had before: we switched to Mattermost. We added a whiteboard and flip chart tool called Mural to our toolbox. Also, we added a couple of games, and I will come to this a bit later. For all of this new and additional tooling, we also wanted to make sure that our privacy standards were kept up. Zoom, for example, was completely out of the question for us; and all of the other tools that were now added to the bundle had to be audited by our security and privacy team to make sure that they comply with our regulations. What you see on that slide here, on that graph, is that as soon as our COVID-19 measures started and the teams started to add more tools, the amount of audit requests to our security and privacy team grew beyond their means of answering them. Let's then look a bit more into team collaboration. Even if, before, our teams were already used to collaborating in a dispersed and remote-friendly setup, everybody being remote was still a new situation. Some were not used to it at all, and they highly appreciated the tips they got from their always-remote colleagues. Additionally, teams had to create arrangements to cover for the colleagues that had to care for their kids. What we added to our mix of working together were daily in-person stand-ups; before, we had a mix of in-person and asynchronous ones, but now we wanted to make sure that everybody sees each other at least once a day. We asked the teams to create written team agreements, so that everybody knows when each team member is available, taking into account the times needed for family care and homeschooling. Then, we provided remote facilitation trainings, and I want to highlight here, and this is why I put this book cover on this slide: we have two amazing agile coaches in our company, Kirsten and Jay, and they also shared their knowledge in two of the talks that were given in the past days. If you missed them, make sure to re-watch them in the recordings, because not only in that book, but also in the trainings that they gave in the company, they shared their amazing knowledge on how to make remote meetings work. Which is so important, because you all know how video call fatigue, people not speaking up, crappy connections and so many other things keep people from interacting in a remote meeting. We also used these techniques in our remote team days.
Even if we couldn't allow teams to get together anymore like we did before, when we asked them to get together in person three times a year, we still wanted to make sure that they spent some time on team building and fun activities. The agile coaches also facilitated remote team days, which were better than nothing, but still not the real thing, because people could not really meet. You may also have experienced that after a full day of being in front of the monitor and having remote meetings, you don't enjoy a virtual drink with your colleague that much, because it means another hour in front of the screen. But still, we tried to make it as pleasant an experience as possible. After we had covered our bases with all of the measures outlined before, we knew that this was fine but not sufficient. Because it is not only on team level; we also wanted to make sure to keep us all together as a company. There were a couple of measures that helped with that, like the coronavirus chat channel that I mentioned before. Also, a couple of colleagues started to blog about their experiences in lockdown or curfew in the different countries they are in. You can also find a couple of those posts publicly on our company website. We introduced an always-on video call where people that were just missing their colleagues could drop in and have a chat. Once a week we joined for virtual drinks. In the lunch break, we had online lunches together and added some games. We also added what we call a happy hour for serendipitous meetings. It is like a chat roulette where you get mixed together with a random colleague, and you can either chat for five minutes about whatever you would like, or use the pre-prepared questions. It is quite fun, and you really learn a lot about your colleagues. Then we had to make sure that we replaced our cross-company summer week with a remote event, which is what we just did last week: we created a fully remote hackathon and innovation jam. All of this really helped, but it only got us so far, because really no remote event can fully replace getting together at a company summer offsite at some amazing castle. Let me now share some of our learnings. The first one, pretty early, was: people, don't forget vacation, and ensure that people take it. We saw that at the beginning of the year people expected the travel bans to be lifted before the end of the year, so they postponed their vacations, they got tired, and performance in the teams started to drop. Also, the expectation then would have been that as soon as the travel ban is lifted, everybody goes on vacation and the team is not able to work anymore at all. So we educated each other about that a bit. Then, you might have seen that funny mock-up of the Gartner hype cycle adapted to current times. It was quite helpful for us to think about how to keep a close eye on the mood and pulse of the company. We all went through something like this mood curve here. At the beginning everybody was quite aware: there's this pandemic, we need to stay at home. And then we made sure to set us all up properly. And then we had that peak where everybody was quite engaged: we can do it, we are all in this together. And then we got into that big valley of disillusionment, where people were stressed with homeschooling, with video call fatigue, and just getting depressed by the long isolation.
Still, and I think this is the phase where we currently are, we are ramping up, we are getting used to it, we are adapting. And the big question mark here is: will we reach a plateau of productivity? It will not be the same as under normal circumstances. But will this happen, or will there be a major loss of productivity due to people not being able to stand the situation anymore? Because overall, with schools closed for weeks, lockdown extending into months, and offices still closed, people started to ask themselves, and also us on management level: when is this going to end? So we are all in this together, but this also means that we are all getting completely stressed out at the same time. And this also meant that people were really struggling to achieve the goals that we set at the beginning of the year, while at the same time either having to care for their kids at home or covering for the colleagues that did so. So this was where we needed to acknowledge that we cannot change the situation, we cannot make it end. There is a new normal which is way less comfortable and enjoyable than the old normal, and we can only try to make it tolerable for everybody, and accept that we might not be able to achieve what we set out at the beginning of the year. And these principles that you see on the slide here, they were not drawn up by us, they came from somewhere on the net, but we discussed them a lot, because they highlighted quite well what you can expect under these circumstances and what you can't. So then, to support our people under these circumstances at eyeo, we ramped up our mental health support. Even before, we had yoga in the offices and we offered meditation, but now we moved to online yoga twice a week and online meditation on a daily basis. We had mindful leadership trainings to make sure that our leaders are compassionate, and also trainings on real mental health issues. We involved a psychotherapist for trainings, and we now offer the tool Instahelp to our staff, which provides access to a psychotherapist for a couple of sessions and is free for our staff. We also made sure, I mentioned vacation before, that people could keep their vacation days from the years before and did not lose them. Normally you lose them after March, but we allowed people to keep them, so that they take enough vacation and get the rest they need. But the major part here was that we needed to re-scope and de-parallelize what we wanted to achieve in terms of our strategic goals throughout the year, and we discussed this quite diligently. And lately, we are now discussing whether we should reopen the offices, because, as in the past, keeping them closed worked best for achieving our goals of keeping everybody safe and keeping up the productivity. But now, after nearly half a year, we see health and productivity declining, and not due to Corona, but due to offices being closed and the travel ban, and thus we now need to reconsider. We have had a re-entry plan in our drawers since April already. We wrote down all of the things we would need to do to get people back into the office, and based on that amount of measures to take, we put it away again; there was no issue with having the offices closed for quite some time longer. So let's just keep it at that.
And as such, now with the mood changing, we need to see what we have to do to make sure that at least more people than currently can use the offices again. So we involved our medical and workplace security advisors, and they supported us with a proper risk assessment of what we need to do and how we need to involve the whole company. But I think the most interesting and most complex question of this overall Corona re-entry plan for the offices is not which rules we need to take into account, but how to make people comply with these rules. I said we had this hackathon and innovation jam last week, and we made this a project for that innovation jam. A couple of teams worked on it, and they came up with a couple of great ideas on how to enforce rules, but also on how to incentivize people to comply with rules. But the most important outcome of that innovation jam was that educating people on why the rules are important needs the most focus. So, what's next for us? The first thing is that in August there will be a pilot for in-person team days. Our security and privacy team, which is the most thorough and diligent in terms of making sure that rules are really complied with, created a great framework on how to get together in person while still making sure that people don't infect each other, including voluntary Corona tests up front. Then, in September, we are going to decide whether we want to allow people back into the office. We will also provide influenza vaccinations to our staff, to make sure that at least with that the immune system is kept on a good level. Each and every year we run our own ad blocker developer summit, which this year we will move to a remote setup, like this remote FrOSCon as well. And we will consider whether internally we are going to have another cross-company remote event. We need to balance whether these cross-company interactions are valuable, or whether we are only adding to the video call fatigue that we are facing. Last but not least, in December we have planned our Christmas party. We don't know yet: can we make this happen? Can it be in person? What we learned in the last months overall is that we just can't plan for something like that. Things will change quickly, and at the moment we can still hope that we can probably meet in person at least at the end of the year, but it will probably move to next year as well. So I am wrapping up with a very quick overview of what changed between the old normal and the current COVID-19 times. I won't go through all of the details here, because I am a bit late in my timing, but feel free to address these topics in the Q&A afterwards. So let me end with three very quick major takeaways that I would like to share with you. First: working remotely under normal circumstances is totally different from working remotely during a pandemic. Our second takeaway is that, as a leader, but also at whatever level or experience you have in the company, it is so important to be approachable, to share your own stories, to build compassion and stay kind, because you all share that same stress level. And third, really: nothing beats an in-person encounter. We really tried, we tried so many options for emulating in-person encounters with remote means, and you get so far, but in the end nothing beats an in-person encounter.
And with that I am done with my talk. I am looking forward to your questions now, and also, because nothing beats an in-person encounter, I am really looking forward to hopefully meeting you again in person next year at FrOSCon. Thanks everybody. Please let the questions come in, either spoken or in the chat. I see, so there was one question: are there any typical mistakes that should be avoided? I think that our major mistake was that we thought we could just take everything we did together on site, be it cross-company events or team events, and emulate it remotely, if we just did it really well and wanted it enough. But it didn't work out. We even considered remote beer pong for one of our remote events. We let that drop in the end, but as mentioned, you can only get so far, and you will really stress people out by offering too much and involving them too much. If you offer some team building experience in a non-remote setting, it will be an enjoyable joint experience. If you do the same thing with everybody at home in front of their monitors, it means that there are kids that people have to take care of in parallel, they will have spent eight hours already in front of the screens before that event, and so on. And you need to take into account that the people are spread over different time zones. When we had our first remote event, we really tried to make it work for all people in Asia as well as for our people in the US, meaning that I, for example, started my day at 7 with a breakfast and ended it at midnight having virtual drinks with the people, which is totally okay if you are on an offsite, but if you spend that time in front of your monitor, it gets really stressful. Then there is another question, or comment, here: one of the most interesting things that has happened with remote conferences and remote working is that people have been able to collaborate from different parts of the world. When we all get back to the normal, how do you think we can still accommodate this? Geographically it is quite challenging; it means we will have to leave out people who have joined in, which would be sad. I am not quite sure that I agree with you here, because to an on-site conference, be it a public one or a company event, you can invite people. People can travel there. You spend the time in the same time zone. When you have a remote conference, you need to make sure that everybody can join in their specific time zone. It needs to be a 24-hour thing, which again means that you don't have the interaction between each and every participant that you would have when everybody is in the same time zone. As the questions are flowing in, I am going to move forward: how to onboard new team members best, especially young ones who still have to learn the basics? We went with our normal approach, like providing them with mentors, but making sure that these mentors are well versed in remote tooling and remote facilitation techniques, and also making sure that these new team members get the proper onboarding trainings as soon as possible. To explain how we normally do onboarding: even for remote workers, we get everybody to Cologne for two weeks, which means that they can mingle among themselves, but also with the rest of the on-site staff. Now this was not happening anymore. But we have hired 50 new people at eyeo since the beginning of the year, so we did a lot of remote onboarding.
It worked quite well; the people I asked afterwards told me that the experience was really nice, and we also improved over time based on what we learned in the beginning. For example, it is also important, even in a remote setup, that the group of new starters has their own joint chat or video channels, so that they mingle as well and share their first-time experiences. Next question: you talked about team agreements beforehand; what did these contain besides the availability hours of the team members? Additionally, it was also about how people can best be addressed. Would you like to receive an email, or would you like to be pinged on our chat tool? Do you want to have your discussion in a chat or in a ticket, for example? So it was a combination of how the team wants to work together in these days, but also something like a "manual of me": how can you work with me best in these days, taking into account childcare, for example.
Open Source communities organize themselves for remote, distributed, and asynchronous work. Companies predominantly prefer colocated and synchronous collaboration. Keeping offices closed due to Corona, provides a challenge - and opportunity - to combine best of both worlds. Open Source communities naturally organize themselves for remote, distributed, and asynchronous work. Companies, on the other hand, predominantly prefer colocated and synchronous collaboration. Keeping offices closed due to Corona, provides a challenge - and opportunity - to combine best of both worlds. At eyeo, we create the Open Source software Adblock Plus. Organizing work as an Open Source community is deeply ingrained in our company's DNA. Still, as a company, this is not necessarily the best way to collaborate. In the last years we became more agile, more synchronous, and more team-focused. We try to balance both worlds. With nearly half of our staff working from home, distributed world-wide, we continuously improve remote collaboration. At the same time, we strongly invest in frequent get-togethers and colocated team building. With Coronavirus hitting Europe, we were very early to close all offices and ban all business travel. With our rich experience in organizing remote work, the crisis hit us by far less than others. My talk will cover our best practices in combining remote work with an agile mindset, and how we addressed Corona's challenges regarding international collaboration during crisis, supporting staff tasked with homeschooling and child care, and overcoming mental health issues. --- Jutta Horstmann is eyeo's Chief Operating Officer (COO). In this role she is responsible for eyeo's continuous improvement as an organization. She leads the company's Coronavirus Task Force and is responsible for its crisis management.
10.5446/51760 (DOI)
Welcome to my talk, "Fast or not too fast: visible and invisible benefits of the serverless paradigm". My name is Vadym, I'm from Bonn. Generally I do this talk together with my colleague Christian Bannes; this time I'm alone, but he has contributed a lot to all the previous versions. Some words about me: I'm very active in the serverless and Java communities — I'm co-founder of the Serverless Bonn and Java Bonn meetups — and I work a lot with AWS. We will talk about serverless in general, but there will be some aspects that are specific to AWS. I'm not an expert in Azure and Google Cloud, but I think many things are similar there. This presentation at FrOSCon is, for me, a new year and a new challenge — this time in English for the first time — but I think I will get it right. I'm working for a company called IP Labs in Bonn, only several kilometers away from Sankt Augustin, where FrOSCon usually takes place. We provide software for designing and ordering photo products — calendars, photo books, prints, posters, everything where you can print your photos. We were founded in 2004, and for about 12 years we have been a 100% subsidiary of Fujifilm Europe. So that's it for the introduction of the company and myself, and we can get started.

We will start with the value proposition of serverless: what is the total cost of ownership of the serverless paradigm? We will draw the full picture. First of all, you don't manage infrastructure, operations, and maintenance. Simply ask yourself: is infrastructure maintenance your core competency or not? If you are a cloud provider like AWS or a data center provider, then probably yes. Otherwise, ask yourself whether you want to spend your time and effort on infrastructure, operations, and maintenance — and people know what that means in the microservices world. Another thing is that scaling and fault tolerance are built in. So ask yourself: can you get your capacity planning right if you have huge spikes? Or do you want to solve the hard problem of fault tolerance by yourself? Designing such a system is really a challenge. Another point: with the serverless paradigm you need fewer engineers. What does that mean? You rely heavily on managed services, so you can simply glue things together. You need fewer engineers to start implementing and validating your idea, and you can probably do more with the same amount of people. That's probably the core message: you can do more with the same people if you focus on the right things. You will also have less code written, for the same reason of relying on managed services. I really like this quote from Paul Johnston, who is very active in the serverless community: whatever code you write today is tomorrow's technical debt. The best code is no code written at all — and also think of configuration as code, because you have default values and changed values, and that's part of the code too. Less code also means low technical debt, because you don't own too much code. The time and effort required for maintaining your solution over its whole lifecycle will be much more than developing it — we probably spend 75% of the time maintaining solutions which we developed some time ago.
And that's a huge amount of time and effort, because we think of our software as a product, not a project that is done and finished. So try to minimize the code written — that will save you time in the future. It's also about focusing on business value and innovation, because you free your time for doing the things which matter for your company, and probably every organization wants exactly that: be innovative, deliver business value, and so on. As a result you get faster time to market, which is a key differentiator in today's business. Also ask yourself: what is core for your business, and what can you get as a commodity, a utility, or software as a service? We'll talk about this in the next slides.

Fast or not too fast — there is no silver bullet for all these things. So what I did is prepare a kind of decision checklist for you, to decide whether to go with serverless or not. There are a lot of factors which can influence your decision, and many of them are specific to your company, which I don't know, but we will go through some of the major ones. The first is the application lifecycle. What you have to understand is that many applications are in one of two phases: explore and exploit. In the explore phase, you want to validate your hypotheses quickly, experiment rapidly, and run your experiments as cheaply as possible — and serverless is a perfect fit for that. You don't have to purchase infrastructure, you don't have to make commitments for one or three years, and you probably don't have too many users yet, so serverless is simply the cheapest way to try things out and architect your application. It's a kind of greenfield world, and it's really a perfect fit. But of course, once you have built something which already provides customer value, the next step is to exploit: to run this at scale and to build a profitable product around it. There is no right or wrong answer as to whether serverless is the right paradigm there — you will probably build partially serverless and partially non-serverless architectures. The things you have to ask yourself are: how much of the stack do you want to own on your side to deliver the business value, and what do you want to outsource? By outsourcing, I also mean handing over service level agreements, regulatory compliance, price, and roadmap to your service provider. It's difficult to say what's the right thing to do, because you can get a really good product as software as a service, but you don't control the roadmap. If you are a small company, you rely on your provider to deliver the features you want, and it can become quite challenging to wait. So sometimes you have to decide: I don't want to own some component — it's not my core competency — but I want to be faster, I want to fulfill the needs of my business, and I don't want to wait. Ask yourself what you want to own, what you want to outsource, and where you should spend your time, effort, and focus. And of course, in the real world we have existing applications, and you can't magically move them all to your cloud provider — especially for serverless, because it requires rearchitecting. But you can try to modernize parts of them.
Probably many of you know the strangler pattern, which was coined by Martin Fowler probably 17 years ago: when scaling the application and delivering new features becomes painful, you put a proxy in front of the whole application, delegate new features through this proxy, and implement them in some other architectural paradigm. In the serverless world, you can use API Gateway for this, and also the Application Load Balancer, which now allows you to invoke Lambdas — so you can start doing things differently from some starting point. In the serverless world there is also the FinDev concept, because serverless allows you to figure out how much a feature costs you, in terms of spending across all your cloud services — but you can also measure the business value. So if I offer this feature as a premium feature, how much do I earn with it? You can compare the two and decide, for example, not to offer something, or not to invest your time and effort. There are software-as-a-service tools like Lumigo who give you this detail about spending in your serverless architecture on a per-feature basis. This is also one of the decision points, because you can do value-based pricing — depending on usage, not on a subscription model. The pay-as-you-go model is a huge opportunity in the serverless world, for both sides: the software-as-a-service provider and the customers.

The next point is a bit more technical: you have to understand your workloads, meaning the architecture and the style of your application. Event-driven architecture is a perfect fit for serverless, because many things are events which you react to with some business logic. The same is probably true for RESTful applications, because you can put an API Gateway in front of them and do all kinds of things there. The same goes for WebSockets — it can become expensive with the API Gateway pricing model, so you have to look carefully if you have millions of requests per hour, but from the architectural point of view it's a good fit. Another thing is batch jobs — asynchronous jobs triggered at some specific point in time — and I think they are also a good fit. At IP Labs we have a lot of batch applications and batch jobs, like payment capturing, emails, or deleting saved projects after some days if the customer hasn't purchased them. Initially we packaged all the batch jobs into one application, and had the issue that most of the jobs had to run at midnight. There is a danger that you run into scaling issues: somebody has a job scheduled for two o'clock in the night and then changes it to also run at midnight, and the capacity sitting there won't be able to execute all those jobs in parallel. If you build this as a serverless application, you can scale all those jobs individually so they don't interfere with each other — that's a really huge advantage. Internal tools are a similar case: an internal tool doesn't need to run 24/7, so serverless with its pricing model is a perfect fit there.
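To make the strangler setup described above a bit more concrete, here is a minimal sketch using the AWS CDK in Python. It is only an illustration under assumed names — the function, the asset path, and the legacy URL are made up — not how any particular application is wired:

```python
# Hedged sketch: route one new feature to Lambda, proxy the rest to the
# legacy application. Names and the legacy URL are hypothetical.
from aws_cdk import core
from aws_cdk import aws_apigateway as apigw
from aws_cdk import aws_lambda as _lambda

class StranglerStack(core.Stack):
    def __init__(self, scope, id, **kwargs):
        super().__init__(scope, id, **kwargs)

        new_feature = _lambda.Function(
            self, "NewFeature",
            runtime=_lambda.Runtime.PYTHON_3_8,
            handler="handler.main",
            code=_lambda.Code.from_asset("new_feature"),  # assumed asset dir
        )

        api = apigw.RestApi(self, "StranglerApi")

        # New functionality served by the Lambda ...
        reports = api.root.add_resource("reports")
        reports.add_method("GET", apigw.LambdaIntegration(new_feature))

        # ... everything else is proxied to the legacy backend.
        api.root.add_proxy(
            default_integration=apigw.HttpIntegration("https://legacy.example.com")
        )
```

Over time, more and more routes move from the proxy to dedicated functions, until the legacy application behind it can be retired.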
There are also other workloads — machine learning, artificial intelligence, big data, and so on — and I see many use cases coming. But AWS releases — it's currently frozen again. Do you see my presentation? OK. Yes. We can see the last slide. The last slide — it became frozen again, so I have to go out and restart the screen sharing. OK, that's the same situation we had before. Is sharing starting? Yeah, we can see the slide now. OK, so we try to go forward.

For machine learning and artificial intelligence, AWS released Lambda layers, where you can provide your own runtime and pack your favorite machine learning tools like TensorFlow, so these features became available. Also, recently AWS provided the possibility to attach Elastic File System — shared file storage — to Lambda, so you can install your machine learning tools and programs there (a small sketch of this follows below), which isn't possible with S3, because S3 is a kind of bucket and you can't run programs from it. So AWS recognized that there is a lot of space to improve, and they released features which will probably enable many of these use cases in the future. But you also have to understand whether you need specialized hardware — for example, is GPU access required? Lambda currently doesn't support GPU hardware, but that's probably not the Lambda use case anyway, because Lambda is for short-lived, low-latency applications. Also, when you write Lambda functions you choose the memory, and CPU is allocated proportionally to the memory. Sometimes you have applications which don't need this RAM-to-CPU ratio — you need more RAM and little CPU, or the other way around — and that can become pricey with serverless. So think about whether your workload is CPU-bound, RAM-bound, or network-bound. Also: do you need constantly high performance? Sometimes you have applications that need response times below 100 milliseconds, like bidding and gaming platforms, and you can run into issues with serverless because of cold starts and because of the added latency when you glue services together, like API Gateway and DynamoDB — you have to think heavily asynchronously to make this possible. So it's probably not a very good fit if you really need constantly high performance. Another question: do you need high throughput? Lambda network bandwidth is really limited — an order of magnitude lower than a single modern SSD — and it's shared between all functions packed on the same VM, so throughput can also limit you. And do your functions need to communicate with each other, or can you decouple them with a queue or an API? Functions are generally not directly network accessible, so they must communicate via some intermediary service, where you serialize and then deserialize things — that costs time and throughput, so it's also something to think about.

Another area you have to understand is platform limitations, and we will go through AWS examples. The one thing that everybody mentions is the cold start.
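Before getting into cold starts, here is a quick sketch of the Elastic File System attachment mentioned above. Everything here is illustrative — the mount path and file name are assumptions, and Lambda only exposes EFS under a mount point you configure (it must live below /mnt):

```python
# Hedged sketch: a Lambda handler reading a (hypothetical) ML model file
# from an attached EFS mount, cached across warm invocations.
import json

MODEL_PATH = "/mnt/models/classifier.bin"  # assumed EFS mount + file

_model_bytes = None

def handler(event, context):
    global _model_bytes
    if _model_bytes is None:              # load once per container
        with open(MODEL_PATH, "rb") as f:
            _model_bytes = f.read()
    return {
        "statusCode": 200,
        "body": json.dumps({"model_size": len(_model_bytes)}),
    }
```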
With cold starts, you have two situations: with a virtual private cloud, and without. Generally speaking, you will get a cold start if your container was released and your function runs for the first time, or if you have a huge peak with many functions called in parallel — say a newsletter was sent out — and there are no free containers, so new ones have to be started. Things have to be initialized, your execution environment has to be initialized, and you experience some penalty. C# and Java as languages generally suffer from this, because they start slowly. AWS released provisioned concurrency for this: you can reserve capacity which stays warm. Of course you have to pay for that capacity, but for use cases where you know you're sending a newsletter out and many people will click on it at 9 o'clock, it's sometimes better to reserve capacity than to run into issues. For languages like Java there are also other strategies. People who know GraalVM know that there is SubstrateVM with an ahead-of-time compiler, which produces a native image, and this native image enables quicker startup and a lower memory footprint. There are frameworks in the Java space — Micronaut, Quarkus, and Spring Boot — which offer support for GraalVM native images, and that may be the way to go, because Java and serverless have not really been friends so far. But people are working on this, as Java is still one of the most popular programming languages.

Then there is the situation with Lambda and VPC. Sometimes you are forced to put a Lambda into a VPC depending on the services you use — for example, the Relational Database Service or ElastiCache require being behind a virtual private cloud, and for Lambda to communicate with them, you also have to put the Lambda behind the VPC. The way it worked previously: as your execution environments scaled — as more Lambdas ran in parallel — more network interfaces were created and attached, the number of interfaces being a function of how many Lambdas run in parallel. And this caused many issues: you had to manage the IP address space on your own, you could reach account-level network interface limits, and you could hit the API rate limit on creating new network interfaces. In that case, you got an error and couldn't do anything about it besides retrying. Also, a Lambda behind a VPC could, in the case of a cold start, add up to 10 seconds to the execution time. It became practically impossible to use Lambda with a relational database because of all these issues. Then, last year, AWS released an update: the network interface is now only created when the Lambda is created — when you first create or update the Lambda or its VPC settings. This reduces the cold start massively, from 10 seconds to below one second, and the improvement is rolled out in all regions and availability zones. So it's now possible to use Lambda with services behind a private network. And when you use a database, you have choices — most relational databases have to sit behind a virtual private cloud.
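Picking up the provisioned concurrency point from above: reserving capacity can also be done ad hoc from a script, for example shortly before a newsletter goes out. A hedged sketch with boto3 — the function name, alias, and number are made up, and the real API requires a published version or alias as the qualifier, not $LATEST:

```python
# Hedged sketch: reserve warm capacity for an expected traffic spike.
import boto3

lambda_client = boto3.client("lambda")

lambda_client.put_provisioned_concurrency_config(
    FunctionName="newsletter-click-handler",  # hypothetical function
    Qualifier="live",                         # an alias or version
    ProvisionedConcurrentExecutions=100,      # capacity you pay for while reserved
)
```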
Another possibility is to use Aurora Serverless, the AWS-native relational database, in a different way: not managing a connection pool, but talking to it via HTTP. The Data API enables this, and the connections are managed for you. It's still in beta — even one year later, it's still in beta in most regions — and for some use cases there is still a lot of latency, which is probably why it hasn't gone further yet. But generally, don't be scared of cold starts, because you have them in the container world too. Whether you use Docker with Kubernetes as an orchestration tool or something else, if you require another server or another instance, it also takes up to 30 seconds or even minutes to start — so you have an even bigger cold start. To avoid it, you scale earlier: you see the CPU at 40% and you scale up, and that way you over-provision and overpay, which can become really pricey. So you have all these issues in the container world as well, and scaling per function is simpler — you just have to think differently. Generally, cold starts can be an issue, but they don't really matter if you make the call asynchronously, so for many use cases it's not even an issue. And one second added in the case of a cold start is often acceptable, because AWS keeps the containers around for a long time — they don't release the exact numbers, but one hour is probably a good estimate, so a user will experience a cold start roughly once an hour. Not a huge deal, I think. Or you can use provisioned concurrency.

Other platform limitations: the maximum duration for a Lambda is currently 15 minutes — it was tripled one and a half years ago. Another one is API Gateway: if you call a Lambda behind API Gateway, the API Gateway timeout has a maximum value of 29 seconds, so if your Lambda takes longer, you will receive a timeout. This may be an issue — but if you use microservices, you want to respond quickly anyway; even 29 seconds, probably even three seconds, is too much. So you have to think asynchronously if you have a Lambda which executes for more than half a minute. Also, you can currently only assign three gigabytes of memory — that was also doubled one and a half years ago. I think it's okay, although you probably can't do some image transformations in Java with three gigabytes of memory. So there are limits to think about, and they will probably be increased again later. Other limits: maximum concurrent invocations. There is an account-wide limit on how many Lambdas can run in parallel, and depending on the region it's between 500 and 3,000 parallel executions. This is a soft limit — you can increase it on request. The catch is that it doesn't scale instantly: if you want 10,000 instead of 3,000, you get 500 additional parallel executions per minute after the initial burst. There are also connection limits: if you use Lambda with a relational database, you may run out of connections in the database if too many Lambdas run in parallel against it, because, as you know, the number of connections for a database is a function of its size — a large instance gives you some number of connections, an xlarge roughly double that, and so on.
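The Data API mentioned above is one way around these connection limits: the Lambda talks to Aurora Serverless over HTTP and never holds a database connection itself. A hedged sketch with boto3 — the ARNs, database, table, and query are placeholders:

```python
# Hedged sketch: query Aurora Serverless via the RDS Data API, no pool needed.
import boto3

rds_data = boto3.client("rds-data")

response = rds_data.execute_statement(
    resourceArn="arn:aws:rds:eu-central-1:123456789012:cluster:orders",      # assumed
    secretArn="arn:aws:secretsmanager:eu-central-1:123456789012:secret:db",  # assumed
    database="shop",
    sql="SELECT id, status FROM orders WHERE status = :status",
    parameters=[{"name": "status", "value": {"stringValue": "OPEN"}}],
)

for row in response["records"]:
    print(row)
```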
So you may run into these issues, and then you have to think: maybe I should use the Data API to avoid managing connections on my own, or even a NoSQL database like DynamoDB. They are a much better fit for the serverless world than relational databases — but people still stick with their relational databases because they are familiar with SQL, and there are reasons for that too.

Another area is cost at scale: you have to understand your bill. If you look at the bill, you'll see the Lambda costs are really only a small fraction. This is a screenshot from one of our serverless applications: a total cost of $30, and Lambda costs only $0.25. The rest goes to other services — Lambda is the compute, but it's only a small fraction. Other things add to the costs. API Gateway costs you $3.50 per million API calls, which doesn't sound like much, but if you have many services handling millions of calls per hour, it can get pricey. That's why AWS recently released an alternative to API Gateway called HTTP APIs. It's still in beta, but I think it will become generally available, and it's 70% cheaper than API Gateway. Currently it has fewer configuration options, but for most use cases it's good enough — and I think it's even faster than API Gateway. It's probably the way to go in the future, but currently there is no feature parity with API Gateway, which was released, I think, five years ago. It will enable cheaper use cases for people who rely heavily on APIs and API Gateway. You also have to think about the DynamoDB options. The default option was provisioned capacity: you have to think about your reads and writes, how to provision and scale them — with that, DynamoDB didn't feel very serverless, because you had to manage all those things. Then they added on-demand scaling, where they scale for you, but of course it adds to your cost: at high utilization you pay about 7x compared to provisioned capacity. Then again, think about your application — does it run at high scale all the time? Probably not. So it's often a good decision to go with on-demand: you pay a bit more, but you don't have to worry about provisioning capacity at all. Other things add up on the bill, too. Logging costs can be really huge if you keep your logs and never delete them, because you pay not only for log ingestion but also for storage. Monitoring costs: CloudWatch is a really expensive service, so think about how many metrics you want to track, because the price depends on this. There are data transfer costs if you send your data out — between availability zones and between regions — a really complex thing to understand. X-Ray for observability adds to the cost. One of the most expensive services is Step Functions, the service for orchestration, where you pay, I think, $25 per million state transitions. And if you enable caching in API Gateway or the DynamoDB cache, you have services running all the time — it isn't really serverless anymore, because there are instances running behind the scenes, which also adds to the bill.
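One concrete lever on the logging costs just mentioned is a retention policy: by default, Lambda log groups in CloudWatch Logs keep their data forever, and you keep paying for the storage. A hedged sketch — the log group name is illustrative:

```python
# Hedged sketch: cap log storage costs by expiring old log events.
import boto3

logs = boto3.client("logs")

logs.put_retention_policy(
    logGroupName="/aws/lambda/newsletter-click-handler",  # assumed log group
    retentionInDays=14,  # must be one of the fixed values CloudWatch Logs accepts
)
```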
You also have remote API calls, where the Lambda is sitting and waiting for a response — and if you don't set a timeout, that is of course also one of the cost areas. And there are third-party services which you use for observability, because CloudWatch alone is not enough — we will talk about this later. They are really pricey services, but they do add value for you.

The next topic is organizational knowledge, and it's probably one of the most sensitive topics in the serverless world, because you have to ask yourself: do we already have DevOps knowledge in the organization, and how does it all fit together? If you have classic system administrators, they probably won't be happy if you go serverless, because they ask: what can I do in this world? There is a pretty good talk from Tom McLaughlin, "What do we do when the server goes away?", where he explains how classic administrators can take charge when working inside development teams — agile, scrum teams — rather than being decoupled from them. There are a lot of challenges in the serverless world where they can help. Like alerting for the whole application — and probably not every application is 100% serverless. Chaos engineering becomes more important, and organizing game days is one of the things: seeing whether your system is fault tolerant and how you react to failures from the cloud provider. This is an area where system administrators can shine and help. Also infrastructure as code, automating everything, and testing the infrastructure code — because providers release new features and rename things, infrastructure code can break quickly, and you have to test it too. Another huge area where they can contribute is helping to understand the constraints of AWS services and choosing the right one, because each service has a lot of constraints. One example is event sources: what do you choose — Kinesis streams, SNS, SQS, EventBridge, or even a combination of those services? They all have their use cases and their constraints, and I think system administrators often understand those constraints even better than developers — they are very sensitive to them. So they can really consult and help understand which service is right for which scaling behavior, where you may experience issues and have to evolve the architecture, and they also know the different pricing models. This is only one example, but it's a really good way system administrators can help, and it's a huge area. Another question is, of course: are your developers willing to learn new languages, if you have people who only program Java or C#? As I told you, things have improved for those languages, but generally you'd prefer something like Node.js, or maybe you have Go people. You need the kind of organizational culture where developers want to learn new things and take on responsibilities in the operational area, working tightly with system administrators. There are all these political and cultural things to take into consideration. We have several teams at IP Labs, approximately 30 developers, and currently the second team is developing completely serverless.
That team was founded three months ago with former system administrators in it, and by now they have found their role. With the right culture and the right motivation, you can convince system administrators to join the scrum teams — the developer teams — and contribute there.

The last area, which really deserves a whole talk of its own, is platform maturity and tooling. In the AWS world, you have infrastructure-as-code solutions: CloudFormation, CDK, and Terraform — Terraform is probably not the best fit for serverless. You have to understand what you choose for infrastructure as code. You also have different development environments and frameworks, like AWS SAM and Amplify, each with strengths and weaknesses, plus the third-party Serverless Framework — so you also have to decide which tool to choose. If you have several teams, it's probably better that the teams choose the same tool, so you have shared knowledge in the organization, rather than every team trying different things. Then CI/CD: there is CodeCommit, Git as a managed service, but not many people use it because it doesn't have feature parity with Bitbucket or GitHub — so ask yourself whether you want to try it or need a more mature solution. Log aggregation, the same thing: CloudWatch is perfect for serverless applications, but if you have EC2 instances, the one-minute resolution of the metrics may not be the best fit, and if your application is partially serverless and partially not, you have to ask what the best strategy is. Monitoring, tracing, and alerting is a huge area where CloudWatch offers only basic services; there are tools like Lumigo, IOpipe, and Epsagon which provide observability as a service, with really good tracing and alerting, but they are really pricey. CloudWatch has been improving since last year, though, so AWS is releasing features there too. Then security: serverless security is a huge new area, because you deal with events, not just HTTP traffic, and there are many different angles to defending these applications. For integration with third-party services, AWS released EventBridge, which allows you to connect to services like PagerDuty, Datadog, and Zendesk without using webhooks — a new way to integrate serverless services with third parties outside the AWS world. And another area is local testing and debugging. It's really difficult to do serverless locally, because IAM permissions and so on are hard to replicate. You can do some things locally with DynamoDB, Lambda, and API Gateway — there are Docker images for this — and there are software-as-a-service companies like Stackery that help with it, but generally speaking, local testing with AWS only gets you so far. Developers, or even teams, can have multiple staging and test accounts and do things there, or even test in production. So this is the decision checklist: you see there are a lot of things to ask yourself, and everything is a trade-off. There is no right and wrong — you have to be aware of many different things. Now, some words about the future of serverless, where it might be going.
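Before that, one small illustration of the "pick one infrastructure-as-code tool" point above — a minimal CDK app in Python that declares a DynamoDB table with the on-demand billing mode discussed earlier. Resource names are made up, and this is only a sketch of one possible choice, not a recommendation:

```python
# Hedged sketch: a tiny CDK (Python) app as one possible IaC choice.
from aws_cdk import core
from aws_cdk import aws_dynamodb as dynamodb

app = core.App()
stack = core.Stack(app, "OrdersStack")

dynamodb.Table(
    stack, "Orders",
    partition_key=dynamodb.Attribute(
        name="orderId", type=dynamodb.AttributeType.STRING
    ),
    billing_mode=dynamodb.BillingMode.PAY_PER_REQUEST,  # on-demand scaling
)

app.synth()  # emits CloudFormation, which is what actually gets deployed
```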
I really like this quote from Simon Wardley, who is well known in the serverless community. He was asked whether serverless is really a niche, and he answered: that might be true, if the niche is called the future. So he says serverless is the future. Simon is also known for the Wardley map — we probably don't have time to explain it fully, but he says that everything evolves from genesis on the left side to commodity on the right side, and Lambda-like execution environments as a service have evolved into the commodity state, so everybody can use them. But he also tells us that in order to use a new paradigm — and serverless is a new paradigm — you have to consider the so-called co-evolution of practice. You can't do new things with the old approaches: you won't successfully program a NoSQL database with only relational database knowledge — it simply fails. So what are the co-evolved practices in the serverless world? As already mentioned, you will have true DevOps. There is a link below to the DevOps topologies, where they explain right and wrong DevOps topologies — the failures people produce — and the right topology in the serverless world is probably that dev and ops really work together and share their responsibilities. OK, now it's frozen again, so I probably have to go out. Now it's gone. Is the screen not shared? I'm checking. Oh, the screen is not shared. You can see it now? OK. So, FinDev: financial responsibilities also go to the team, because they can see the invoice and improve it — they know the pricing of the services, they can set up billing alarms, and so on. This is a real shift, and it's not easy: the devs now carry dev, ops, and financial responsibilities, and also security. This kind of cross-functional collaboration sounds easy, but it's really difficult when you consider how many things developers have to know and learn. You also have to rely completely on infrastructure automation — everything should be automated, otherwise you lose track. Chaos engineering is also a co-evolved practice — really a must-have in serverless, and in the microservices world too. Each developer may have their own AWS stage and account, in all dimensions. It's simply that cheap with serverless, because you only pay as you go: if you don't use the test environment much, you only pay for the storage, and you don't have much storage in a test environment. So every team can get staging and testing environments, and every developer, if everything is automated, can set up a new environment very quickly — a matter of maybe 10 minutes at most, depending on the number of resources. Also, as already mentioned: no local testing environment, because locally you don't have all the restrictions — how many puts and gets on S3 buckets, how many concurrent invocations — you don't have any of that locally, so you can't run all the tests locally; it doesn't really make sense. You have to work in the live environment. And testing in production becomes really important, because sometimes you can't use staging for tests — you don't have enough traffic or enough data there, so you can't rely on tests on staging, and you have to test in production.
You have to think carefully about how to do this, with feature toggles and so on — there are a lot of strategies, which are beyond the scope here. Also tight integrations with third parties, as I mentioned with EventBridge, which can integrate with PagerDuty and even Shopify — a lot of such custom integrations become available every month in AWS. Last year, the Berkeley view on serverless computing was released — also an interesting read. They say: we predict that serverless use will skyrocket — the same thing Simon Wardley says. But they also see challenges, and these are essentially inherited from the limitations we have already discussed. Provide low-latency, high-IOPS ephemeral storage — currently not possible. Provide serverless durable storage — that one has been solved with the possibility to attach Elastic File System to Lambda. Networking performance has to be solved, so that Lambdas can communicate with each other quickly. And there are challenges with security: you can define fine-grained security for your Lambdas, but sometimes you want to define security policies once and inherit them for other functions as well, and that's currently not possible — you have to write your own permissions each time. So there are things to be considered, but in general, security is much better in the serverless world than outside it. Also accommodating cost-performance: we will see hardware released which is really tuned for your execution environment — for example, hardware for a Java environment or for a Python environment — and the cost reductions from the hardware providers will be passed on to us, so it becomes even cheaper.

There are also improvement areas, from my personal perspective. To mention Elastic File System: yes, it's now available for Lambda, but it's not a match for S3. S3 is natively integrated — you can fire events when a new file is created or updated on S3 and call a Lambda — and that's not possible with Elastic File System, although sometimes you need those use cases. Also, compliance services like AWS Config, which are tightly integrated with S3, are not integrated with Elastic File System, and for compliance you have to be aware of what's happening because of regulatory requirements. CloudWatch improvements: a lot has been done — you now have a query language for searching your logs in CloudWatch, and the embedded metric format, so you can send your metrics asynchronously in a specific format to CloudWatch. But in the area of observability and alarms, the software-as-a-service tools really offer much more than CloudWatch. It's not clear to me whether CloudWatch will try to close this gap, because all these companies — Lumigo, Epsagon — work closely with AWS. Sometimes I see AWS release services with a limited feature set and leave room for these software-as-a-service providers to offer something on top — but that's something I can't predict.
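As a small illustration of the EventBridge integrations mentioned above, here is a hedged sketch of publishing a custom event, which rules (and partner integrations) can then route onward. The bus, source, and detail payload are made up:

```python
# Hedged sketch: publish a custom application event to EventBridge.
import json
import boto3

events = boto3.client("events")

events.put_events(
    Entries=[{
        "EventBusName": "default",
        "Source": "shop.orders",                 # hypothetical source name
        "DetailType": "OrderCreated",
        "Detail": json.dumps({"orderId": "42", "total": 19.99}),
    }]
)
```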
We also use PagerDuty for the alarms, because we can do more with it and receive alarms for the whole system — for the serverless part, the non-serverless part, even for the non-cloud part of our solution. So we do use some of these tools, because they provide value for us. Also Elasticsearch: many people rely on it, and it's currently not serverless — you are managing the instances, the number of instances, and the storage; you have to increase the storage when you run out. It's an important service if you don't want to rely only on CloudWatch, because Elasticsearch is much more powerful for searching — and it should become more serverless, too. Another thing, as I already told you: CodeCommit. It's not nearly comparable to GitHub and Bitbucket, but it has nice integrations with Lambda — you can fire events from CodeCommit. It's Git as a managed service, and you have the whole family of CodeCommit, CodeDeploy, CodePipeline, and so on. But CodeCommit is currently really limited, which is why many people use GitHub or Bitbucket; those have plugins for CodeDeploy, for example, because when you deploy into your cloud environment, you deploy CloudFormation in the end, and you need something like CodeDeploy to do this. Also X-Ray support for observability: when it was released it didn't support SQS and SNS; now it does — the last thing they added support for was AppSync. But EventBridge, as a new service released one year ago, is still not supported, and that's what is needed: you would like to have observability across all your serverless services. If one is missing, you have a kind of broken pipeline — you don't see what's happening. Another thing I have seen lately is that CloudFormation lags behind: new services don't support it from day one. EventBridge was released, but it didn't have CloudFormation support for several months, so you couldn't automate everything — you had to wait, and could only prototype things. That's probably okay for the first two months, but for me, CloudFormation should be supported from day one. So these are the other improvement areas, especially in AWS.

To wrap up: there are things you have to ask yourself in order to decide whether to go serverless or not — the application lifecycle, understanding your workloads, platform limitations, your costs at scale and what the services you use add to your bill, organizational knowledge, your culture and how you can transform it and how people react, and the maturity of the platform and tooling, so you understand which tools to go with — and you have options with each cloud provider. I think that's it. You can contact me — you can find me on LinkedIn and on Twitter; I don't have a personal page, so if you have questions, don't hesitate to write me. And now we can go to the questions. Thank you very much for listening. I don't know how many people are in the room, but I'm finished and happy to answer your questions. All right, thank you very much. So, do we have questions? If so, please drop them in the chat. Okay. Have you seen how many people attended the talk? About 20 in the stream and three more in the room here. Okay. All right, it seems that there are no questions.
Okay, then thank you very much — I will be happy to answer questions by email or via other platforms. I will send the presentation to the organizers, so you can also read it later.
When we talk about prices, we often only talk about Lambda costs. In our applications, however, we rarely use only Lambda. Usually we have other building blocks like API Gateway, event sources like SNS, SQS or Kinesis, and we also store our data either in S3 or in serverless databases like DynamoDB or, more recently, Aurora Serverless. All of these AWS services have their own pricing models to look out for. In this talk, we will draw a complete picture of the total cost of ownership in serverless applications (including visible and invisible costs) and present a decision-making checklist for determining whether to rely on the serverless paradigm in your project. In doing so, we look at the cost aspects as well as other aspects such as understanding the application lifecycle, software architecture, platform limitations, organizational knowledge, and platform and tooling maturity. We will also discuss current challenges in adopting serverless, such as the lack of low-latency ephemeral storage, lack of durable storage, insufficient network performance, and missing security features.
10.5446/51763 (DOI)
Thank you for joining our talk and a very warm welcome. It's fair to say that remote working has taken the world by storm in the last year, and all organizations are at different phases of adapting to it. We've come to find that one of the most challenging aspects of remote working is the human aspect, and specifically the human interaction part. So today we're going to use neuroscience to help us navigate some of these challenges and share some possible solutions with you. But first, let's introduce ourselves — you can go to the next slide. Hello, my name is Kirsten, and the other lady you can see on the screen is Jay. We're both based in Cape Town, South Africa, and we're working at a company called eyeo. Some of you may know eyeo as the company behind Adblock Plus, but our bigger vision and mission is actually to create a sustainable and fair online ecosystem by building, monetizing and distributing ad-blocking technology. Jay and I are working as remote agile coaches, we've both been really passionate about remote facilitation for the last couple of years, and we're the authors of the Remote Facilitator's Pocket Guide. We're looking forward to sharing some of our thoughts with you today.

Some of you might be sitting there wondering why we are talking about meetings at an open source conference. When we think about open source communities, a lot of what we think about is the power of how organically they operate and how asynchronous they are — so where do meetings fit into that picture? For us, it's important to talk about meetings because of open source organizations. In our context, we work at a company that is open source, and even though we were very remote-friendly before Corona hit, we have now had to go fully remote, and it's been a bit of a change for everyone. So the importance of optimizing these meeting spaces — the way people come together online — really hits home for us when you think about organizations. That's going to be our target audience for today's talk: if you're working for an open source company and looking for ways to bring your people together in more effective, human ways, we're hoping that you'll leave with some new ideas and maybe some inspiration.

Okay, so what can you expect from this talk today? We're first going to look at the brain and talk a little bit about how it relates to remote meetings. We're also going to give a very brief introduction to facilitation and the importance of meetings. Next we'll move on to our first principle, which is called creating connection, and we'll speak about how people can connect in remote spaces. We'll not only cover the principle, we'll also share practical methods that you can take away and use, and give you some silent time to explore some of the formats we've created. Then, following a similar format, comes our second principle, which is called flow: how can you create flow within your meetings, what does it feel like to get stuck, and again some practical tips that you can take away. So that's what you can expect from our talk today. But first, I just wanted to find out if any of these faces feel familiar to you. You've called a meeting to solve a problem or to discuss a particular topic, hoping to get everyone's engagement.
So you dial in a few minutes early, and now you're waiting as people join the call one by one. Everyone starts on mute, and it's starting to feel a little awkward with these faces just staring back at you. So you say welcome and you ask how everyone's doing, but no one replies. All you have is the sleepy cat looking back at you, totally bored by this whole affair. You might have the angry squirrel that's wondering why you invited them to this meeting in the first place. Or you might have the shy seal staring back at you with a nervous look in their eyes, hoping that you don't ask them a question, hoping that they don't have to speak. It just feels like no one wants to share anything — it's beginning to feel super awkward.

But we want to share that meetings really do matter. If you think about what an organization is built on, it's built on tons of tiny interactions, and meetings are just one part of those interactions. If the quality of the meetings is low, it can impact the whole organization — not only people's time, but also the outcomes that you're there to achieve. So you have to find ways to achieve healthy meeting outcomes, and especially in these crazy times of change, remote collaboration feels especially hard. A few years ago, we could have said that remote was considered cutting edge, but it really isn't anymore — it's becoming quite the norm. So meetings matter, and remote collaboration is hard. We all probably know what it feels like to be on a remote call and see some of the benefits; what we don't often speak about is the challenges, and how hard it can be to hold the space for effective remote meetings. One of the most important challenges of remote work is collaboration and the human aspect of it. So we want to show you what you can do about it.

But before we get into how the brain works, we just want to quickly touch on facilitation. We won't be able to go into it in great detail, but we want to share three things to keep in mind about facilitation — we see it as both a role and a skill. The first is that facilitation is about making it easy: making it easy for people to collaborate and making it easy for people to reach outcomes. The second is about neutrality: your role is not to focus on the content, but to steer the process — you're really there to help people get to great outcomes. And lastly, focusing on quality: the quality of the interactions and the outcomes. If you can remember just these three things, it can really help your meetings. But now we will look at the brain and see how it can affect the quality of remote meetings.

Okay, so now we are going to start talking about the brain, and how a basic understanding of the brain can really strengthen both your facilitation and the outcomes that your meetings achieve. One simplified way of understanding the brain is that in many ways it's just this big threat-reward detection mechanism. For those of you who joined our workshop yesterday, we shared the same image. The idea is that in every given situation, your brain is first trying to determine one of two things: am I safe here, or am I possibly in danger? Because if I'm in danger, I need to make a quick decision — I need to either get to safety or do something quickly.
Whereas if I'm in a safe space, or there's a possible reward, then I can behave in a completely different way. And what's interesting is that a lot of research has shown that the brain is much more likely to detect threat than reward. What this simply means is that as human beings, we're much more likely to see something in a negative light than a positive light. A practical example: if people walk past you and someone looks at you out of the side of their eyes, you will often interpret that negatively — maybe they were judging me, maybe something was off there. It could have been for a million reasons that they were looking at you that way, but humans are primed to detect threat in a situation, because that's what keeps us safe. As an evolved species, the ability to detect threat and respond quickly is what kept us safe; being able to spot that there's chocolate cake around the corner doesn't have the same evolutionary significance. So the first thing to bear in mind is that humans are very likely to detect threat in a situation. The second thing, as you can see in the size of the arrows, is that the threat response in the brain is felt much more strongly and lasts much longer than the positive reward states. When people are feeling threatened, the chemicals that are released stay around for much longer — which is why, if you've felt stressed or had a fright, that sense of anxiety lingers, whereas a positive state can change really quickly.

Now if we go to the next slide, going into this in a little more detail, you can see two regions of the brain. These are the primary regions that are activated depending on whether you feel threatened or safe. The prefrontal cortex is the part of the brain responsible for rational, complex thinking — this is really what we want people to be able to access in meetings; we want them to be able to do rich, high-quality, creative thinking. The limbic system, however, is what is activated when people feel stressed: the fight, flight or freeze response. When people start to feel stressed, even for the smallest reasons, resources start getting diverted away from the prefrontal cortex and toward the place where quick decisions are made. And bear in mind, this is all happening unconsciously — we're not sitting there going, okay, I need to make quick decisions now. It's happening invisibly. Maybe someone says something rude in a meeting, or there's an undercurrent in a word, and you begin to feel threatened, and slowly — not quickly, slowly — you begin to edge toward this more threatened state.

And now if we go one step further — you can click next — what's interesting here is that conditions impact cognition quality. As facilitators, it's really important to bear in mind that the conditions surrounding and within a meeting are going to impact the quality of thinking people are able to do in that meeting. And it will be totally unconscious: people won't necessarily be able to articulate why they feel either threatened or safe, but the quality of thinking they're able to achieve will definitely be impacted. The next thing is that quality thinking is essential to achieve quality outcomes.
So as facilitators, we really need to keep this in mind: if we are after quality outcomes, we need to help unlock quality thinking, and to do that, we need to pay attention to the conditions in these meetings. That's a foundational understanding of the brain as we begin to think about the kinds of thinking we're hoping for in meetings, and the states that can be present around them. We can go to the next slide now.

Okay, so just going back to the agenda: we briefly covered some content about the brain, and we'll cover a bit more during each of the principles, connection as well as flow. For now, we're going to move on to our first principle, which is about connection: how can we help connect people in remote spaces? But first, we'd like to do a little interactive questionnaire: which picture best matches your experience in remote meetings? You'll find it if you go to the site and either type in the code or use the QR code — you can do this with your phone, or on your laptop. We just ask you one question: which picture best matches your experience in remote meetings? I'll leave the code on the screen for a minute or so, and then I'll show you what the results look like. I also popped it in the chat if anyone would like to click on it directly. Okey dokey, so let's have a look if there are any results. So far we have a clear winner — we'll just give it a couple more seconds. Oh, it seems like Ron is almost winning. So not that many great experiences — oh, we have a tie, this is getting interesting. Okay, thank you for participating. So far it looks like Sleepy is winning; hopefully by the end of this talk we will have given you some tips about how to make things more engaging. So thank you for participating in our quiz, and we'll go back to our presentation.

We just wanted to share some scenarios with you — we've seen some of these in remote meetings, and maybe you can relate to some of them yourself. It could be that you join a call and you speak a completely different home language from everyone else, so you don't understand any of the inside jokes, you miss some of the context, or they use words way too fast for you to grasp. That gives you a feeling of being left out and not part of the group. There may be a situation where your connection keeps dropping, but nobody seems to notice; they just continue the call without you, and you desperately try to connect back and grasp what was said in the minutes you missed. That can also make you feel left behind in the conversation. Or there's a situation where one or two people are dominating the call, speaking super fast, and when you do try to say something, nobody acknowledges you. Some of these experiences might feel familiar, and they are not that uncommon in remote meetings.

So now let's look at what might be going on for people in these scenarios. We're going to go a little deeper into some of the neuroscience. But before we do, I wanted to ask if anyone could relate to stepping on one of these guys. I don't have children, but I do have dogs, and there are often toys scattered around — and once in a while, I step on one.
And whilst it's not Lego, the shooting pain that goes through your foot and the subsequent rage that follows is maybe something that you can relate to. I think that acute feeling of being angry because you're in pain is probably relatable. And also, if we think about meetings, it's definitely not going to help us to do high quality thinking, right? So what relevance does a Lego block have to remote meetings? Now if we go to the next slide: somebody did a study which I just really like the name of. It's called Broken Hearts and Broken Bones. And she was busy looking at what the relationship is between social pain, so feeling rejected or not part of the group, and physical pain. And to study this, the way she went about it was she hooked people up to machines and they were asked to play an online game called Cyberball. In this game, the participant could see three dots on the screen, and they were told that they were one dot and the other dots were two other participants sitting in another room. Now what they didn't know was that those other two participants were actually part of the study too. And so in the first round of the study, the three little dots passed the ball. So that little black dot is a ball, and the participant is happily passing and all three of the dots are passing the ball nicely to each other. Then they enter into a second round, unbeknown to the participant, and now the other two dots are only passing the ball to each other. So they are excluding the participant, who is now the red dot. And what's really interesting is what happened when people started to feel excluded in something super simple, just passing a ball amongst people you've never met, on a screen. If you go to the next slide, you'll see what the outcomes were. So the first thing they found, which was interesting, is that the brain registers social pain in the same regions as physical pain. Physical pain activates the brain in two ways. The first is the sensory part: the location of the pain. Is it my shoulder, is it the top part, the bottom part? Your brain needs to know where the pain is happening. And the second part is the affective part. That's the part of the brain which is understanding the emotional aspect: this feels horrible, I'm not in a good space, we need to change the situation because I'm in pain. And what's interesting is that social pain activated the same region of the brain as physical pain. And the next one, which is even more interesting, is that that part of the brain is responsive not only to experiences of rejection, but to cues that represent social rejection. And so what that means is that the part of the brain responsible for physical pain is triggered if you have an actual experience of rejection. So maybe a group comes up to you and says, you know what, we no longer want you to be one of our friends. That is very clear cut. It's an experience of social rejection. But the second part is that cues can also trigger it. So it doesn't have to actually be an experience of rejection. Maybe it's just something you've come to correlate. So maybe somebody in a meeting rolls their eyes when you say something. Maybe for you, you interpret that as a cue of rejection: this person feels a certain way about me. Whereas in actual fact, maybe they just got something in their eye and they're trying to get it out of their eye. 
But because for you, it's a cue of social rejection, the same parts of your brain that get triggered in physical pain are now likely to be triggered, because you're interpreting it as social rejection. And if we go to the next slide, when we start thinking about the consequences and what this means for us, think back to those stories where Jay was talking about someone maybe not being able to speak the same language and feeling a little bit left out, or maybe their connection dropping and it feeling like no one's actually pausing to look after them. What's interesting is that this study found that we're only able to engage in poorer quality thinking when we're in pain. And that kind of makes sense, right? Rationally, the Lego block example: when you're feeling pain, you're not going to be doing high quality thinking. And similarly, when people are feeling any degree of social pain, they're less likely to achieve high quality thinking. The second interesting consequence is that people in pain are more likely to engage in aggressive behaviors. And this is interesting because people in those moments are ultimately responding to protect themselves. So rather than doing the thing that would reintegrate them into the group — if someone rolls their eyes, rather than doing the behavior which might reconnect them, for example saying, hey, I feel like maybe you missed me there, or, is it okay? Can you hear me? — they're more likely to do something defensive and respond in a protective way, which then can create a loop that exacerbates the behavior, because now they start checking out and then you can actually start seeing weird dynamics happening. And the third part is that over time, this can lead to maladaptive beliefs. And what that means is that the more times you dial into a remote meeting like this and either have a negative experience or maybe perceive someone in a certain way, over time that can lead to beliefs, either about meetings or about individuals. And if you think about organizations, we really don't want people to be developing these kinds of beliefs, either about collaboration being hard and not worth our time, or about other individuals, because then they're not going to be able to collaborate with those people. And so to kind of wrap this piece up before Jay goes a little deeper into the principle: if we think about the brain and this kind of narrative that the conditions around us impact the quality of our thinking, the ability to connect with people and feel socially accepted actually plays a really big role in people feeling safe enough to begin to do high quality thinking. And accidentally we often trigger the opposite response in people without even meaning to, especially in remote meetings. Okay, so speaking about the nature of connection, where do we start? If we can connect with someone, we are far more likely to be forgiving and compassionate, and people will then in turn be far more likely to feel safe being themselves. It can be quite incredible to see how differently we behave when we understand someone, when we understand the context. Remote working can be quite challenging when it comes to creating connection and understanding. If you're in person, for example, it's often almost accidental that you create connections with other people. It can be as simple as someone wearing shoes that you have and you go, oh, I really like your shoes, I have the same pair. And you start triggering a conversation from there and forming some sort of relationship. 
In remote spaces though, it needs to be a lot more intentional. It requires a lot more effort. And so if connection helps people be at their peak performance and their most authentic selves, how do we create the conditions that can foster these kinds of connections in meetings? What are some simple things that we can do? So now we'll just share with you some of the practical methods that we thought about. These are by no means all of the ways that you can create connections; these are just some of our ideas. And so the first one is opening the space, and opening the space with a check-in, but also paying attention to those first few moments of a meeting and how you open that meeting. So for example, if you start the meeting by getting straight into the detail, with a little bit of aggression, or not really introducing the space, it sets a completely different tone to when you're opening the meeting with some sort of agenda, or sharing how you will engage and what people can expect. And sometimes we say open the space with a check-in because check-in questions can be quite useful. And we don't mean that it has to be super frivolous, like, what color are you feeling right now? It can be something super meaningful, like, how comfortable are you that we'll meet our deadline? That kind of check-in question can help bring people into the space and also give you some data for the rest of the meeting. The second is working in smaller groups. A lot of times people try and solve problems in one big group, but it can be useful to break your group down into smaller pieces. Another name for this is murmur groups; it comes from when you're in a room and you break people into smaller little groups and there's this murmur going around the space. It gives people a little bit more safety in terms of speaking up and sharing their opinions. It allows everyone to have a chance to speak, which they're less likely to have in one big group. We know that not all tools cater to this functionality, but there are other ways you can do it, by creating different calls, for example, and just preparing beforehand how people can join in smaller spaces. The third is just paying attention to the space. This might not feel super practical, but observation is one of your biggest strengths in a remote call. And this can just be noticing what's happening in the space. So you can only see this much of me, but maybe you observe someone's connection dropping, or someone trying to speak and not being able to. Maybe it's even something as simple as seeing the mute and unmute icon on someone's screen, indicating that they're trying to speak but they're not able to engage. Just paying attention to the space and seeing if you can notice as much as possible, so you can take care of that remote meeting. And the last one is just bringing attention to different contexts. When you're working across lots of time zones, it can be super easy to forget when someone is skipping lunch because the meeting is booked in the middle of their lunch slot, or someone is missing dinner with their family because it's creeping over their dinner slot. And it could even mean the challenge of different languages, and not acknowledging that someone is speaking too fast or going at a completely different pace than someone is able to follow. So just making sure that you're bringing attention to those different contexts. And maybe it's just showing all of the variety of time zones that are on the call to acknowledge what is going on for people. 
Or maybe it's providing a writing mechanism, saying this will be easier for us to read, so that we don't have to rely on verbal cues. Just providing different ways to share context is useful. So now we'll just give you some time to explore. Yeah. So we wanted to pause here, and I think because this is a lecture not a workshop, the chat might be locked. So we'll share the link in the chat, but Jay is also going to slowly scroll through that deck and we'll share the slides afterwards so you can access them. But what we really wanted to do here was pause, because we've shared quite a bit of content, and we wanted to allow some silence for you to reflect on what's coming up, as well as look at what some of these practical ideas might look like in real life. So I'm sharing it in the chat. I'm not sure if everyone can see the chat. If you can't, Jay is going to go through these slowly and we're not going to talk. We're just going to allow you some time to process or notice these things. I think people are joining so I think they can get a little bit of a call anyway. Cool. And just one quick note, we use Google Slides just because it's free, most people can access it and there's quite a low barrier to entry. There are lots of cool whiteboard tools out there as well, but this one's pretty easy to access for now. So thank you. The chat should be public for all users. Cool. I'm going to keep quiet now. So thanks Jay and Kirsten. A wonderful talk. If anybody has questions. Are we supposed to be finished already? I think. No, you have time left. Yes, sorry. We're halfway. We're just pausing here to allow people a chance to look at these slides. Is that okay? Okay. We will be finished at 3.30. We were aiming for an hour. Thank you. Cool. Okay, and as we look at that, we're going to transition back to the main deck, but we will share our main slide deck with you afterwards as well, and you're welcome to reuse these. Those are just some practical ways that you can nurture connection in remote spaces, which then brings us back to our agenda. Just to recap where we've been so far: we started speaking about the brain and the kinds of conditions that we would like to activate in remote meetings so that people can do high quality thinking, and then we spent quite a bit of time thinking about connection, both what happens when we don't feel connected to people as well as how, as facilitators, we can help connect people. And now, in the last bit of our talk, we're going to speak about flow and how we can create that ease of movement through a meeting, because often remote meetings feel jerky and stuck. So what can we do about flow in remote spaces? Okay, so once again, we have some scenarios for you that we've seen happening in remote meetings; these are things that you might have also noticed. The first being, which we mentioned a little bit earlier, you join and someone jumps straight into the detail, straight into the content, not even taking a minute to figure out the agenda or set the tone for the meeting, just jumping straight in. The second we've seen is how fatiguing meetings or video calls can be. And so much of it is because we are unable to see body language, and so people kind of get tired, not sure when they can speak, constantly interrupting each other, maybe because of lag, maybe just because of connection. That constant interruption can be quite fatiguing. And the third one is just holding so much content in your mind. 
So the brain can only hold so many pieces of information at one time, but sometimes when there's a reliance on verbal communication, there's so much information floating around that people can't actually hold all that information at one time. So now we'll look at the brain some more to help us unpack some of it. And to get us started, I'd like you to think about crossword puzzles. Growing up, my mom loved to do crossword puzzles, and I hated them because I could never get them. But every once in a while, I would find one of those words. And if you've ever done a crossword puzzle, you can probably relate to that little zip of energy you get when you put in the letters, and they match, and it works, and you just know you've closed a little loop. Because ultimately, solving crossword puzzles feels good, or whatever is more familiar to you, that aha moment. When you come to an aha, it feels good. So what is going on for people when we have these kinds of aha moments? Because if we think back to those scenarios that Jay just shared, those are the opposite of aha moments. Remote meetings can be frustrating because we get stuck, because people are trying to solve something. They're going off in a direction together, and they've opened this loop, and they're not getting to the point where they can close that loop. Whether it's because of technical issues, or because maybe there's too much going on and they're getting distracted, or they can't see the other people: we're trying to solve the crossword puzzle, we're trying to come to something together, and remote meetings present all these reasons that stop us from getting to the aha moment. So if we go to the next slide, there was a study done in 2018 looking at the neural correlates of aha moments. And the way that they studied it was they hooked people up to machines and presented them with three words. And so you can do it for yourself now. If you look at these three words, house, bark, and apple: participants were asked to find the fourth word that links all three of those words. So for a moment I'll just pause. Can you find the fourth word that connects all three of those words? Okay, well, don't feel bad if you couldn't, because I couldn't either. But once in a while, some of them you can. So the word that connects all three of these is tree. Because you can have a treehouse, tree bark, and an apple tree. And so they looked at what happened for people at the moment that they came to that aha moment, at the moment that this loop they had opened closed. And if you go to the next slide, you will see the consequences of what happened. So the first thing is that closing loops activates reward circuitry in the brain. And reward circuitry is the circuitry that feels good. So when you do something and you get rewarded for it, it feels good and you want to do it again. It's kind of your brain going, hmm, that was really nice. Let's do it again. So when you solve the crossword puzzle word, your brain's going, oh, that felt good. Let's do it again. And we'll think about this in a bit more detail in meetings now. But that's kind of what you want. You want the opposite of the stuck feeling. You want people to be feeling, hmm, this feels good, I want to keep going. That energy that'll keep propelling them forward. 
And the next thing that they found, which is also interesting, is that not only do people feel more engaged and positive, but they're able to think more creatively. So each time they close a loop, or something positive like that happens, they're able to get more creative and do better quality thinking. And so if we think back to remote meetings, this is exactly what we want when people are collaborating. We want to be unlocking better quality thinking for people, even though it's unconscious. We want to create the conditions which enable them to do this. And unfortunately, remote meetings often accidentally result in the opposite, where people aren't able to flow and get to these moments of aha. And if we now take the next step and connect this back to remote meetings and the consequences of what's going on in the brain, we can go to the next slide. The first consequence that you might see is people begin to feel disengaged. So when people are trying to close a feedback loop — let's say you've been trying to solve the crossword puzzle, it's now been 10 minutes, you're staring at that row and you just can't get it — you begin to disengage, because your brain eventually at some point says, you know what, the effort's no longer worth it. It's too tiring. You stop getting the reward that you were seeking, and then you start getting distracted. That's maybe when you're sitting in your remote meeting and you start scrolling Twitter. You've just gotten to the point where, like, we're going on so many tangents, so many conversations are being spoken about, we're not closing any loops, and you kind of just check out. Consequence number two is that you get tired, because you've got this background task running and it's consuming brain CPU. So maybe you start finding that this remote meeting is just getting really tiring and you're not quite sure why, but you're just feeling tired, and it's because you're running so many background tasks. The thing that you started talking about didn't get resolved, and then maybe someone dropped off the call and you had to start again, and it begins to create this fatigue. And the third consequence is that you may become frustrated, and because this is happening unconsciously, you're not quite aware why, but you begin to become a little bit unnecessarily frustrated. So that little angry squirrel at the beginning: I've definitely seen lots of angry squirrels in remote meetings, where people are just a little bit on edge or angry, and it gets incorrectly associated with someone. So I may be frustrated at someone else on the call, but it's not actually them that's frustrating me. I'm frustrated because my brain's trying to close these loops in the meeting and it's unable to, for all these reasons, and then I just begin projecting frustration. And so to kind of wrap this up: if we think about remote meetings and all the possible things that block us from flowing through a meeting, it's very possible that people become disengaged, tired and frustrated, and we see this so often. It happens all the time. So what can we do about it? Okay, so what we're speaking about here is enabling flow. Remote meetings are often punctuated with stops and detours, and they can be quite frustrating, as Kirsten mentioned, but that can be avoided with a little bit of planning, some guidance and some preparation beforehand. 
But some of those stops and punctuations could be because of connections dropping, as we mentioned, or people constantly interrupting each other because of lag, or because people have simply gotten lost in all the information that's been shared and don't know where they are in the conversation. Some of this is caused by an absence of behavioural cues, and that causes interruptions. Maybe the group is just struggling to think because they're tired, and it's hard to notice that on a call. These are the moments that can make remote meetings feel jerky, and make people feel like they're just getting stuck and not able to get to those quality outcomes they were hoping for. And so a facilitator's skill is about judging which detours in a meeting are helpful and which are unhelpful, and which detours can have a meaningful impact on the outcome. How can you shape the flow of meetings to help the group arrive at those quality outcomes? So maybe it's just thinking about off-topic conversations, for example: giving the group a way to take those out of the meeting, put them in a parking lot and channel them in a different way. That's just one way of thinking about how you can enable flow in your meeting. And Kirsten will share some practical methods for how that's done. Okay, so as we said with the last one, these are just some of our practical methods, and there are lots of other ones out there. But the first thing you can do is simply make the agenda or session rules visible. There's a lot going on in a meeting, and people, especially in a remote meeting, are juggling the technical tool, probably also their calendars popping up, and maybe they're also using a collaboration tool. And oftentimes an instruction will be given or something will be said and it's missed. So simply by visualizing that, you're creating a bit of a road sign for people that will help them navigate through, and it's one less reason for the meeting to get stuck. If we all know how we're going to proceed in the meeting, we have a clear picture of where we're going to go, and there's a visible space we can refer back to, it means that we're more likely to flow through the space. We know where the detours ahead are coming. The next technique is about co-creating visual documentation. And this is a really big one for us, because remote meetings often get stuck for simple things. Sometimes the audio breaks up and then you have to repeat the instruction. Or, as Jay mentioned, maybe people speak different languages at home, and while we're all speaking English, we have slightly different comfort levels, and so some may fall behind or not quite get it. But if you are co-creating visual documentation, even if it's something simple like a written text document where everyone can type, there's an additional guide to help people through the meeting. So we've often seen someone drop off on the audio, and when they come back they can read. They can catch up by reading. And now the whole group doesn't have to stop on their way to closing the loop; that person can reconnect. Or sometimes someone's actually unable to hear, but they can still see it being typed, because your video conferencing often drops off before your documentation tools do. And then the added benefit is, for people struggling with the language, it's sometimes easier to read than it is to hear. And so you're bringing people together and keeping them together on this journey of closing loops in the meeting. 
The third one is one which Jay mentioned briefly, which is creating energy cues and making space for breaks. So quite simply, one of the things which can begin to create fatigue or detour people in a meeting is that they're getting tired. And so we think it's really important to take frequent breaks in a meeting. Our rule of thumb is every 45 minutes take a 15 minute break, both because remote meetings are more tiring and because if you allow people that space to have a break, they'll come back with more energy and be able to close the loops and flow through the meeting more quickly, rather than kind of slogging it out as you go. And then the last one is rooting participants in the present. This is really just about reminding people, at regular intervals, where we are in the journey, so that we don't get lost in the cyberspace of a remote meeting. Because a remote meeting is virtual, people are trying to hold so many things — that picture that Jay showed of the guy with all the charts up — we're all trying to remember so many things that are going on. And if you can just remind people: this is what we've just covered, this is what we're doing now, and this is where we're going, it creates that sense of flow, it helps remove some of the barriers, and it helps people to then close feedback loops and get to that nice positive state that we're really wanting, so that we can keep thinking in a high quality way. And what we're going to do now is the same thing we did just now: we're going to share what some of these practical ideas look like, in silence, and we're going to pause to allow you the space just to think about it for yourself, because we've been doing a lot of talking. So Jay is going to share the deck in the chat, and she'll also be showing it on her screen, and just for a couple of minutes we're going to keep quiet and allow you the space to look through it, and after that we will then wrap up. Okay, I think that brings us to the last one there. Thank you. So once again you're welcome to use any of those, and those are just some ideas that maybe you can start playing with to bring a little bit more flow into your sessions. And what we're going to do now is a quick recap and wrap up. So our intention today was to detangle some of the complexities of remote interactions using neuroscience, giving facilitators a little bit of a deeper understanding of what's happening for people at quite an unconscious level, so that they can create conditions that are helpful. And so we spoke about the brain, and threat and reward, as well as what a facilitator can be thinking about in creating the right conditions and focusing on the process. And then we spoke about connection and the importance of people feeling safe and connected, because otherwise people can experience social pain, which actually registers in the brain in a similar way to physical pain. And we just spoke now about flow, and how we connect ideas and come to solutions, and how frustrating it feels when we're unable to do so, for whatever reason it is. And how, if you as a facilitator can do small things that help people to flow through a meeting, you can really improve the quality of that space. And that pretty much is a wrap. So we have, I think, a few minutes left if there are any questions, but all that's left for us is to say thank you. Our resources, the studies we referenced, are on the last slide if you would like to check any of them out. 
But thank you so much for your time and, yeah, any questions? Yeah, so thank you a second time. Everyone who wants to ask a question, come into the BBB room just below the stream and I will open your microphones. Thanks. And Jay's just shared the slides in the chat also, if you would like access to them. Thank you for the feedback that's coming through. Any questions? Okay. You can always reach us on social media if you have questions or want to explore things further. We are open to discussing things offline as well. I don't see any questions out there. The slides can also be found in the program later on. Oh, I see one coming through. Do you have any suggestions on how to reduce meeting durations? I have a few that came to mind, and then Jay, maybe you do too. I think using asynchronous ways in the build-up to the meeting, so if there are parts of the discussion that you can have via text before the meeting, that can really help. I've heard meetings criticized often for only capturing people's first reactions rather than the deeper thinking. So if it's a session where you need to evaluate lots of options, often what we do is create a document, whether you use Confluence or Google Docs, and invite people to add comments to each other's thoughts there and get some of that initial thinking out, so that when you get to the meeting all you have to do is talk through the comments or whatever. That's one way. Another way I find is to allow silent time within a meeting. Whether it's time for people to read or write in silence, it means you can have ten threads going on rather than one linear thread of conversation. So people are able to get out what they need to say in silence at the same time, and then you can discuss it. So those are two. I don't know, Jay, if there's anything else that came to mind. I would just echo the asynchronous part. I recently ran an interactive meeting where I asked participants to fill out a sheet beforehand, so they were able to gauge and read each other's answers before coming into the meeting, and then we just had a quick 30 minute discussion about what we saw. So that's definitely one of the good options for reducing meeting time. The other thing that also came to mind for me there is that remote meetings also tend to take a lot longer than in person, because of the technical issues, or you just need to go a little bit slower. So I also find it's better to actually just take on less in the meeting rather than trying to do everything. Just take a small piece of the problem and solve that, and then you can take the rest offline or have another meeting, but a short one as well. So rather a few short ones spread out than long chunky ones where it begins to become unproductive anyway. I hope that helps. Okay. I believe one thing people miss in a remote meeting or in a remote workspace is more one-on-one talks with their colleagues. There must be some time allotted for that. Just like watercooler or canteen kind of talking. Yes. Absolutely agreed. I've seen lots of teams solve that in different ways. I think it's a nice challenge to give the team, because it's kind of something they can come up with. Some things that we do: we have random coffee chat slots where you can just dial in and you get paired with someone randomly for a very short time box, so it doesn't get awkward. Just five minutes, and the room has some questions in it already, some conversation starters if you'd like. But there are lots of other ways too. 
So we also have like a gaming channel where someone can suggest if they want to play an online game, because there's a lot of — I don't know if you played foosball or ping pong in the office — that casual way that you come around a table together that you miss in a remote space as well. But there are quite a few fun real-time games you can play online which are super simple too, so offer that to your teams as a casual way to connect. So I definitely agree that you need that space. Those are two possible ways; Jay, any others? So one thing that my team came up with is they just schedule social time in the afternoon. It's basically just a suggestion that came from the team, and it's not a forced thing, so anyone who wants to join from the team can join, and it's just a casual conversation. Because the team is quite close and they're smallish, it's easier to just converse about random topics than it would be when people don't know each other. So it definitely depends on the context and the team and what they come up with, for sure. I see a question: what is your preferred online game? I love board games and I've spent a lot of time investigating the equivalents online. So the ones I can recommend if you're looking for very quick games that are like ping pong, so simple with a low barrier to entry, and where you obviously need to be able to create a private room: HaxBall is a really nice one. We're going to be writing a blog soon because we've got a list of these. HaxBall is really nice. TETR.IO is like a real-time Tetris where you compete with each other. There's TagPro, which is a capture-the-flag mechanic. Curve Fever, which is where you make little snakes. So those are the simple ones. If you're looking for a more complex game, I really like the Jackbox games on Steam. They're really fun and there are a lot of cool ones in the Jackbox party packs. If you're into more complex games, Dominion, which is a card game, has a free online platform, and that's also quite a nice one. There are lots of others, but those are my favourites. Hi, may I pose a question? Yes. In university business we are used to setting meetings for, let's say, 9 o'clock in the morning and starting really at quarter past nine. This is the c.t., cum tempore, which is quite common at universities. Would that be an idea for an online meeting as well, just to leave the people 15 minutes for social interaction? Yes, I think that would be awesome. In fact, if you're working with agile teams — I've seen a lot of teams where the typical rule of a stand-up is that it's short, it's time boxed to 15 minutes. We come in, we say our three things, but in remote spaces I think allowing those to go on a bit longer and allowing people time to chat is really important. So I think that's actually really nice, to start a meeting intentionally with that in mind. Plus one from me. It's also just something you can agree on as a team. We didn't speak about it here, but what we do speak about is agreeing on your session rules or your meeting agreements, and those you often pull from the people in the meeting and not necessarily from you. And so if it's a regular meeting with the same people, you can come up with that kind of agreement, so you set that expectation for everybody in the meeting: the first 15 minutes is just about hanging back. So at least everyone's aware. Cool. I think we're probably on the hour now, or at least at the end of our slot, and there are possibly other things happening. So I'll hand over to the room admin now to close or do whatever is necessary. 
Yeah, so thank you again. I think we'll close down and see you in the next talk. Thank you. Bye everyone.
In this talk we look to neuroscience to unpack unique challenges present in remote meeting interactions. Together we will look at a few research studies that can help explain what is happening in our brains when we interact with people in a digital space. With this deeper understanding, we will also share some practical ideas you, as a meeting organiser or facilitator, can use to shift your meetings in healthier directions. For many people, remote meetings feel very different to in person interactions. You might have experienced in remote meetings that people seem distracted or tense, there’s less engagement and the tone maybe feels a little colder. These conditions impact the quality of thinking within the session and as a result, the outcomes that are achieved. Why are remote meetings so difficult? And what can we do about it? In this workshop we will cover theory and share some practical ideas to overcome potential challenges which might be faced. No prior experience is needed for this talk. We will use some of our own stories to make remote working accessible to those who do not have deep experience in this area. We will explain neuroscientific concepts in a relatable way with many examples to help people navigate. This session is for anyone who cares about creating conditions for quality thinking in distributed teams.
10.5446/51768 (DOI)
Hello everyone, my name is Peter Zaitsev and I am founder and CEO of Percona, the company which specializes in open source databases. And not a surprise, that is what I'm going to talk to you about. I'm going to talk about the changing landscape of open source databases along two dimensions. Dimension one is what is going on with open source in general, and the second one is the changing landscape in database technology. So let's talk about free and open source software. I will use those terms interchangeably for the rest of this presentation, because I believe that, while there are some minor differences in how those organizations define them, for most people it is pretty much the same. So let's look briefly at the history of open source software. Now, one thing which was very interesting for me to discover as I was looking at this is that, if you look at the history of software, the first software was open. In the early days, software and hardware were bundled, and because software worked only on the hardware of a particular vendor, there was no point in keeping it closed. More so, the early adopters which were using the computer systems of those early days really could help to maintain that code, fix bugs, add functionality and so on and so forth, and that was really valuable for the companies at that time. And the changes were actually openly shared according to the academic principles of sharing knowledge, because a lot of early-day computers were used in that space. Now, if you look at developments in the United States at least, IBM had to unbundle software and hardware, in part due to an antitrust lawsuit, so that it wouldn't have a kind of complete monopoly where other folks couldn't compete in producing software for its hardware because IBM's software was kind of given away for free. Another important thing that happened here is that computer software became a copyrightable item in the United States, which it was not before. Now, one interesting thing as I was looking at this, trying to get some more locally relevant information: it looks like things in East Germany continued this way for a much longer time, where computer software was not copyrightable and was therefore allowed to be freely copied for many years afterwards. Well, according to Wikipedia at least. Computer software becoming a copyrightable item really created the multi-billion dollar software industry we know today, and really created proprietary software as a major class of intellectual property that many companies rely on, whether they are directly in the software business or just build something for use in house. And that brings us to the second stage of open source software, which I would call the era of romantic open source and free software. This is one of its leaders, Richard Stallman, whom I am sure some of you have heard about. And I like this picture because, for some weird reason, he is wearing the Percona logo on his hand here. I mean, I don't know why; Percona didn't exist at that time. The best illustration for that era, I think, comes from this book Linus Torvalds wrote a few years later, which was talking about why he created Linux, and that was just for fun. It was not something like, oh, I saw the market's need and I thought I could create a multi-billion dollar industry. That was not really the reason. 
And a lot of the reasons to create and contribute to open source at that time were really driven by making a difference more than making a profit. Now, if you look at the transition to 2000, we can see the rise of open source. That can be seen in different ways, from the famous quote from Steve Ballmer, the Microsoft CEO at that time, calling Linux a cancer — well, thank you for noticing — to the big exit at that time, as Sun acquired MySQL for a billion dollars. Right now there are many open source companies which are worth many billions of dollars, but at that time a billion dollar acquisition of an open source company was phenomenal; no open source company until Red Hat really got there. That also created an era where a lot of businesses really recognized the value of open source and really started to embrace the open-source-first approach, where open source software is seen as a better choice than proprietary software, everything else being equal. Now, if you look at open source as a software engineer, an open source community member, you often will value it for a lot of intrinsic values such as openness and freedom. But if you look at business, we see open source often used for different reasons. From the business side, it's all about the costs and the ROI. Open source software offers you both directly lower costs, because you don't have to pay ransom to somebody like Oracle, and it also makes your costs easier to lower, because a lot of engineers, software developers, folks operating the systems, now prefer open source, and that means it's often easier to find engineers, and better engineers, if they love open source. Open source creates better productivity and faster innovation, because you don't have to rely on a vendor which may be moving at a glacial pace; to innovate and to change the software, you can do it yourself. For example, in the MySQL space — again, databases are very close to me — we could see that while MySQL was moving slowly as it was digested first by Sun and then later Oracle, companies like Google, Facebook and a few others would enhance MySQL for their needs independently of the vendor. And we at Percona could also create an improved MySQL, the version called Percona Server for MySQL; we could not have done that if it were proprietary software. And what is also very important here is avoiding software vendor lock-in. That is really very valuable, and it is especially understood by the enterprises which have been around the block for a long time, many decades. They have often seen how Oracle came onto the market as this kind of nice company which saved you from Big Blue IBM's dominance and lock-in with hardware and software combined, but within a decade or two it transitioned to something very well described by this quote: Oracle doesn't have customers, Oracle has hostages, because people are so locked into the Oracle database software that they can't really leave it. And having a software lock-in is really a very significant strategic blunder or strategic weakness, because this way you will have a vendor pretty much dictate your pricing and, in many cases, the technology roadmap which comes from that. Okay, now if you look at the next generation of open source, many companies have been started as a business first. It was no longer about romantic ideas about open source; it was often about, okay, how can we get this company to really make a lot of money. 
And a lot of those companies are of course venture funded, founded with venture capital, and with that they really need to provide very high returns, and very high returns fast. If you are familiar with the VC industry, it's not an industry which is interested in making the trees greener; this is an industry which is focused on making money for its partners. And particularly because they invest with very high risk — most of the ventures are expected to fail — they really need to get high returns, and get them very fast. Here is what I think has been happening, which is interesting: it was trying to mix the attractive message of open source, which as I said was understood by businesses as the better choice, with the classical business growth strategies, such as build the monopoly, avoid commoditization, increase stickiness, build anti-competitive moats, all those kinds of things you learn in business school. But if you think about it, that is very much against the early-day, traditional, romantic open source values, because open source is not about creating stickiness, it's about choice; it's not about being anti-competitive, it's about cooperation with different folks, often with your competitors. And that really creates a huge variety of not quite open source software, as you will see, which can use "open source", or similar wording like "open" or "free", in the marketing, but which will not provide you fully open source software. Such as open core, where you have both a commercial version and a sort of crippled open source version of the same software. Or "open source eventually", where you have an outdated version of the software as open source — outdated, insecure and unmaintained, I would add — and then the actually supported version as proprietary. There is a variety of shared source licenses, where you can see the code and maybe use it under some conditions, but it's not fully compatible with open source. And there is also open source compatible software. Let me talk about that open source compatible software a little bit more, because I think that is the most confusing for a number of people. Open source compatible software is proprietary software which claims to be compatible with some open source software. So for example, in our database space, Amazon Aurora for MySQL is compatible with MySQL, and Amazon Aurora for Postgres is compatible with Postgres. Now, we have to understand what that compatibility means. What I like to call it is Hotel California compatibility, because it is compatibility which is really designed for you to be able to move to this technology easily, but you will have a hard time moving back once you have really adopted it to its fullest potential. Maybe it's some additional functionality it provides, maybe extra performance, extra security, high availability, some other things which you may not even know you're using. And this is not to say that such technology should not be used; actually, there is a lot of very good open source compatible technology. It is just that you need to understand what you're getting into. And if you really value being able to move off that technology, you make sure your application is also tested and validated with a completely open source solution, which I know many folks do not do. 
And then when they try to do it a couple of years later, they find it is actually quite hard to go back and avoid all the lock-in they created along the way. Now, with that not quite open source software, there are a lot of positives here too: the venture capital and other funding really allows a lot more investment and a higher pace of innovation. If you look at the last decade, there is so much great stuff going on in open source — well, generally in the database space — which would perhaps not have been possible without that. And a lot of that investment spills out to the fully open source solutions. For example, if you look at the MySQL space, of course Oracle is investing in MySQL so that it can sell the MySQL Enterprise subscription, but while doing that, it is producing a fantastic MySQL Community Edition, which gets probably 90% or even more of the features being developed overall. And the bad thing here, of course, is that it doesn't provide all the values of true open source software. The next stage started in the 2010s. Well, technically Amazon Web Services was founded in 2006, but I think it really only started to significantly impact open source in the 2010s. If you look at the cloud and open source, we see there is a lot of tension out there. If you've been following this industry for the last few years, there is a lot of, you know, fear and so on about, hey, Amazon, among other clouds, is really just free riding on open source software, not giving back or not giving back enough. Well, in reality, what I see is that these are businesses, and they are using the open source software according to its license in many cases; that license, however, was not really designed and prepared for the cloud age. For example, in the MySQL space, MySQL and companies similar to MySQL rely on GPL software with the idea that if you want to build something as non open source software which is a derivative work of the open source software, you will need to buy the licensed version — which is what is called the dual license model, which is what MySQL pursued — or, even better, you will have to release your modified work as open source. If you think about the modern day with Amazon Web Services, the already named Amazon Aurora is really a derivative work of GPL-licensed MySQL, but because the software is not distributed — because the software is simply accessible through a protocol while hosted and managed by a cloud provider — Amazon doesn't need to distribute Amazon Aurora as open source or pay Oracle any license fees. And of course, that is not really what was accounted for when those folks designed their business model and their licensing strategy. Now, what happened because of this cloud danger is that we see a lot of, mainly public or venture funded, companies change their licenses, at least for some of their software products, from open source to source available licenses, typically with some sort of anti-cloud clause, just preventing Amazon from taking their software and running it. 
But of course, while it is positioned to everyone as, hey, you know, we are these small guys and we are looking to keep this 800 pound gorilla, Amazon Web Services, from eating our lunch, it also impacts users and customers of the software much more deeply, because it really locks you in if you want to use a particular feature, or a particular variant, or a particular way to use the database, namely as a database as a service. Now, another interesting thing which happened with the cloud is that we now have things being bundled together once again. If you think about before the cloud: if I wanted to run MySQL somewhere, often I would have a server priced separately, then there would be some hosting provider charges, then the operating system I'm running, which I may support myself or may go to a vendor for, something like Red Hat, and then there is MySQL. All those pieces are separate. In the cloud, you are just buying cloud and it's all combined into a single price, and no one can really see very easily how that price comes about. And that means that, with open source software, we have been losing that zero price effect. What is the zero price effect? Well, it is that we as humans typically have a very significant difference in our minds at the price point where something is free versus something is paid, even if it costs a dollar, because just the act of paying for something, versus using it for free, serves as a barrier. And I think that was one of the values of open source, especially for the developers and folks who don't get a lot of money to spend. Now, whether I am using open source in the cloud, for example a Linux powered VM, or I'm running a Windows powered VM, in both cases I have to pay something per hour; it is just that the price is slightly different, and not by so much. Another very interesting thing which has been going on in this space is the cloud databases, or database as a service, which are obviously much easier to use than a do-it-yourself open source solution. Think about using Amazon RDS, for example, which you can just spin up in a few clicks, and it kind of provides some limited high availability, automatically does backups, automatically patches itself, and so on and so forth. Getting all of that for open source components in a few clicks is significantly easier, and easy is obviously very seductive for a lot of developers, who can be under pressure to deliver the features the businesses which hire them need, rather than, you know, doing the cool stuff. And that, of course, is where the danger is for open source software. The even bigger one, I would say, is that not fully open source software is really quite accepted in the market. That is not a problem in itself, because it is fantastic to have a variety of software available — proprietary software, open software, some sort of not quite open source software — but the danger is that the value of open source software may be eroding, and a lot of people do not quite understand the difference between what is truly open source software and what is open source compatible software, and why you should choose one over the other. 
Okay, now let's look at this current decade and see what's going on now, and maybe make some predictions about the future, which, as Niels Bohr said, is not an easy thing to do. Now, if you look at open source, especially commercial open source software in the database space, we can see great momentum for open source software. Red Hat remains the biggest success story for open source, acquired by IBM for $34 billion, but you can see there are quite a few companies in the space, both public and private, which have a very significant valuation. What is also going on right now is that there is a lot of innovation in the open source database space, which I think is very cool, and again, many companies here don't release everything as open source, but they still give a lot to open source. Now, as you all know, we are going through the worst pandemic in 100 years. And what I see is that this is actually good for open source, because it accelerates digital transformation. In many cases, people need to go online more and faster, often while they are also under financial pressure. And that really puts open source on the front line, and I expect that, as the dust settles, this will be very helpful for open source databases. Now, I spoke about database as a service a bit already. What I think is interesting about database as a service is that, as folks, especially developers, got a taste of it, it is increasingly becoming their preferred way to consume database software, whether open source databases or not. And we see a lot of traction here. One of the values this database as a service approach provides is also that it allows engineers to use multiple database technologies, better matching the application needs. Because if you think about it, if you take some, you know, exotic database technology, then to deploy that for a mission critical application you need to figure out how to provide high availability, back it up, maintain security, and so on and so forth — a lot of stuff. Now you can probably deploy it in a few clicks and have the database vendor or cloud vendor take all that maintenance on themselves, and you can just use the database as you want for your application. The reality, though, is that while many cloud vendors market database as a service as fully managed, it tends to be over-marketed. You will find that, while database as a service takes away a lot of toil, you still have a lot of choices to make. For example, you may have heard of and seen an amazing number of data leaks from open databases over the last year or two. And a lot of that really comes down to this database as a service stuff, because you still need somebody to manage security. If you have a developer who, for example, can't connect to the database and just makes things easy for himself by making it publicly open to the internet with some insecure password — well, guess what, that is a problem, and currently it is typically the developer's responsibility, not the cloud vendor's. There are other issues which come from developers choosing and using a database directly without supervision. 
For example, the costs. Because it's so easy to deploy databases in the cloud now, or to scale the instance — instead of requesting new hardware and waiting three months for it to be deployed, you can scale the instance size in a couple of clicks — we often see databases running highly unoptimized, with a huge amount of waste. In many cases, with even rudimentary optimizations, you can reduce the footprint three to ten times, which is not bad; that waste is bad both for companies' budgets and for the environment. Now, as I think about the cloud, I think we have two models of the cloud which are really being evaluated right now, two ways to approach it. One is cloud as a commodity. The cloud, especially in the early days, was often compared to something like electricity: electricity is a commodity, you can typically buy it from multiple vendors, and it makes no difference to you; you can choose and change easily. You can look at the cost, or maybe you have some other preferences, like buying electricity from an environmentally friendly vendor, but it's easy. And we can see examples like S3, which became somewhat of an open standard: all the big clouds have implementations, and there are many compatible implementations like MinIO, and if you're relying on such technologies, you can move your applications easily. The other model, and that is where you would see a lot of cloud vendors pushing you, is a highly differentiated cloud, which is designed to lock you in on as many proprietary technologies available from a single vendor as possible. In this case, if you look for example at what AWS Well-Architected recommends, it really talks about using a lot of those features they build, which of course they are proud of, rather than having, for example, a Kubernetes-based solution which allows you to really deploy your application on prem or on a variety of clouds. I think a good thing here is that, in the grand scheme of things, nobody wants to be a hostage; especially, as I mentioned with that Oracle example, many large enterprises have been here before. So what we see happening with databases in this changing landscape is that we are getting to what you can call the multiverse. You will see folks using multiple database technologies; you'll see fewer companies saying, oh, we only run Postgres, or MySQL, or whatever. Really, it will be a large number of different database technologies, and also, especially for the larger enterprises, they tend to embrace multi-cloud and hybrid cloud, for a variety of reasons. Some of that is the lock-in, which I mentioned; other reasons could be based on government regulations and realities: if you're a multinational which has to do business worldwide, then not in every country are the same cloud vendors available or acceptable for business, for particular reasons. All right. Now, for this multi-cloud approach, we have many proprietary solutions available: Google has Anthos, then there are solutions from VMware, there is AWS Outposts, Microsoft has its solutions, and so on and so forth. But at the same time, we have Kubernetes, which emerged as a leading open source API for hybrid and public cloud. 
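To make that a bit more concrete, here is a minimal sketch of what consuming the Kubernetes API for a database can look like when an operator is involved. This is an illustrative sketch only: it borrows the resource kind used by the Percona XtraDB Cluster operator as an example, but the spec shown is a simplified placeholder rather than the operator's full real schema, and it assumes a cluster with the operator already installed.

```python
# Illustrative sketch: asking Kubernetes for a database cluster by creating
# a custom resource that an operator watches. Assumes the Percona XtraDB
# Cluster operator is installed; the spec below is deliberately simplified.
from kubernetes import client, config

config.load_kube_config()  # use the current kubectl context
api = client.CustomObjectsApi()

cluster = {
    "apiVersion": "pxc.percona.com/v1",
    "kind": "PerconaXtraDBCluster",
    "metadata": {"name": "demo-cluster"},
    "spec": {"pxc": {"size": 3}},  # placeholder spec: a three-node cluster
}

# The operator reacts to this object and does the actual provisioning:
# pods, storage, replication, and so on. The same declarative call works
# on any conformant Kubernetes, on premises or in any public cloud, which
# is exactly the portability argument above.
api.create_namespaced_custom_object(
    group="pxc.percona.com",
    version="v1",
    namespace="default",
    plural="perconaxtradbclusters",
    body=cluster,
)
```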
Now, what is interesting with Kubernetes is that, because it was initially created for stateless applications, Kubernetes has often had a bad rep with databases. Initially it was very impractical and dangerous to run very much stateful databases on Kubernetes, but modern Kubernetes has become quite a bit better, and increasingly you can run databases on it. What is also interesting with Kubernetes is that it's not black and white, being completely open source with Kubernetes or going to a proprietary solution. In many cases you can use the Kubernetes interface while using some proprietary solutions to simplify the Kubernetes management, really picking the right position for you on that scale between open source and simplicity. So if you think about open source databases, here is what we see has been going on and what is important to continue doing. One: it's important to adapt to cloud native deployment in multi-cloud and hybrid cloud solutions. Not all databases have a very good solution for that right now, especially in the open source space. The Kubernetes API is obviously the API of choice. The next thing which needs to be done, I think, is to build simplicity comparable to the integrated database-as-a-service solutions, so you can deploy a Kubernetes-based open source database with a few clicks and have it automatically patch itself, back itself up, and so on and so forth: the major values which cloud databases provide. So the question I would recommend you ask, if you are using database as a service, is how you can get the most of the value open source provides in this reality. And that may refine your choices: you may want to choose something which is slightly more complicated but which gets you a lot of value. From the Percona standpoint, we are working accordingly in this open source software world: we have operators for MySQL with Percona XtraDB Cluster and for Percona Server for MongoDB, which allow you to run those, as well as Percona Monitoring and Management, which doesn't do much for management yet but really provides you functionality to monitor your databases quite conveniently. Okay, that was about open source. Now let me talk in a few words about the landscape and the changes going on in database technology. If you look at the historical view here as well, you can see an interesting resurgence: everything old is new again. In the early days, there was really a lot of fragmentation in data models and query languages. Frankly, a lot of database implementations were built into the applications, so the data model could be essentially whatever the application designed, and a variety of query languages were being experimented with, and so on and so forth. Then in the 80s and 90s we see the dominance of the relational database and SQL as a query language, even into the early 2000s. I remember after the dot-com crash, if you looked at the new technology companies coming up, all of them were using just two relational databases: MySQL and Postgres, that was pretty much it. And there was not much choice of mature databases to choose from to begin with.
Now in the 2000s, and especially the 2010s, we have a lot of innovation in both data models and query languages, and a lot of custom-built databases rather than general-purpose ones. Let's look at the trends driving those changes. One is that developers and architects are empowered to make choices about which database to use; I already spoke about that, and the cloud makes using multiple database technologies easier. Another, more technical thing to consider is that we have a lot more microservice architectures rather than the monoliths which dominated the past decade. Each microservice is typically able to make its own choice about the most appropriate data store for its needs, which creates an explosion of databases. We are also much more at peace with the concept of a multi-store, where the same data may be stored many times. In many cases now, instead of saying, hey, we'll store our data in, let's say, MySQL and that's the only database we'll use across all our teams, we can have data which gets into MySQL and then, through something like Kafka, is replicated to Elasticsearch for full-text search queries (queries which are not suited well to MySQL), and at the same time to something like ClickHouse for large-scale data analysis, or to Snowflake or BigQuery; you choose. All of this goes back to that trend of a multiverse I have been speaking about. Now, we also talk a lot these days about SQL and NoSQL, two competing terms, almost like two religions which you have to follow. You can also describe them as relational and non-relational. I think it's interesting to see that NoSQL is not some particular database technology, or even a data store format; it is everything which is non-relational. And why is that? It's interesting to look at the popularity ranking which comes from DB-Engines. If you look at their ranking by category, you can see that relational databases still account for the majority of use, and in that sense the split of relational versus non-relational makes sense. But then in reality you see a lot of different types of NoSQL databases, which are often purpose-built, and this is only one slice of it, because even within relational databases you can slice further. Look at row store versus column store: those are typically both relational databases but have different use cases, and so on. Or in-memory databases, or GPU-accelerated databases. You can see a lot of other ways to group them. Here is another interesting chart about these groups: you can see something like time series databases growing explosively over the last two years, where relational databases are relatively stagnant. So you can see a lot of development and innovation happening beyond the general-purpose relational database. Now, there are a couple of questions which I think are not answered yet, and they will be answered over the next few years. One is about the relational model and SQL. There are two approaches: some companies break away and provide a completely different data model, like MongoDB with its document store, and others provide extensions to the SQL relational data model.
For example, JSON functionality in pretty much any relational database these days. Interestingly enough, the folks who break away from the relational database and SQL often then have to go back and add something similar to SQL as a language, because it's so popular. For example, Cassandra has CQL, Couchbase has N1QL, and there are a few others, which really leverage the dominance of SQL and the fact that so many people know it. And I think the interesting question is where we are going to land on multi-purpose databases versus multiple databases which simply store the data multiple times and keep it synchronized. For example, there are multi-model databases which say, hey, just store all the data in us and we can be a relational database and a key-value store and a graph database and so on and so forth. Or hybrid transactional/analytical databases saying, hey, there is no need to move data to your analytical data store, just run both your operational queries and analytical queries on the same database. The other question which I think is interesting is scale-up versus scale-out. Scale-up means your database is focused on single-node operation, something like MySQL or Postgres, the traditional scale-up databases. Scale-out databases like YugabyteDB, CockroachDB, and Cassandra are really designed for scale-out, for running on a large number of nodes to begin with. And the question here is: is there room for both of them, or are we really moving to a world where we will only have databases which can run on large clusters very efficiently, and your classical scale-up database will need to adapt or die? Now, here is what is interesting in the architecture trends in this landscape. One: we have increasingly more databases, and I think the majority of the new databases come with distribution in mind. They do not think about a single server; they think about how they can work on a cluster of servers from almost day one. A lot more technologies are also looking at being geographically distributed. The reason is that if you really operate globally, you have high latency, for example between the US and Europe, or Asia, and so on. You also often have data governance questions. For example, you may say, well, the data of US users has to be stored in the US, and the data of German users on European Union soil; that is something distributed databases really think about. There is also a lot of work on cloud native databases, which are really designed to run on Kubernetes as their operating system, not a conventional operating system like Linux. One architecture trend which is not really connected to open source software, but to databases generally, is the rise of public-cloud-only, massively scalable databases, like DynamoDB and CosmosDB, which you can only run on a selected public cloud, and which do not even have a proprietary software version you can install and maintain on your own if you want to. Another interesting trend, utilized by many new database engines, is the separation of storage and compute. That means you can scale your storage
(in many cases something like S3-compatible storage) and your compute for processing independently, rather than together as with conventional databases. Hardware acceleration is also a big trend these days. A lot of that is focused on analytical databases, for example the use of GPUs, but also on the storage level: pretty much all the large storage vendors have some sort of smart storage where you can offload some of the computation, maybe compression, maybe filtering, to the storage. My hope here is that we'll get some sort of open protocol created, because so far a lot of those technologies use proprietary drivers, and you have to lock yourself into particular hardware from a particular vendor if you want to utilize that, which is not great from a lock-in standpoint. Well, anyway, as a summary: you can see there is a lot going on in the open source database space, and it is really a great time to be involved with open source databases. So if that is your interest, please do get involved. And as I mentioned, the big danger I see right now is the devaluation of, and lack of understanding about, what open source software and free software truly mean. So help your friends to understand that. I think we will all be better off if the definition of open source software remains the same. That's all I have for you, and now I will be happy to answer some questions if you have any. Thanks, Peter, for your talk. If you have any questions, just write them in the chat. Okay, there's one question: at what point does encryption at rest come in, in the open source world? Yeah, I see that. Well, encryption is obviously one of the important components of data protection and security at large. In many cases it is also a question of compliance, which is simply mandated by some of the policies for data processors, for example if you work with health care data, financial transactions, and so on and so forth. Now, you might also think: hey, if you can connect to the database, you can still download the data if you have permission, but that is different. If you look at security, it is typically designed as multi-layer protection: there will be one layer to protect the data from, for example, being stolen from the hard drives directly, and another to ensure only authenticated users can access the data they should. Yes, so there is a question from Henrik about hijacking the GPL: how do we protect from clouds hijacking open source software, and should there be a license for that? There was an attempt at this called the AGPL license, and for years there was an idea that it should protect you from being used in the cloud in that way. But in reality it wasn't found sufficient. For example, we know one of the famous adopters of AGPL, MongoDB, later moved to SSPL, a non-open-source license. So we'll see how that evolves.
I think the tough thing with this cloud question is to understand whether it is a question of the software or a question of the service, and how you can draw the line appropriately. As a customer, you get value from open source software by being able to use it in database-as-a-service form from multiple vendors, not being locked in, versus protecting the original author's interest. I think it's a very interesting question which has not been figured out. Again, I appreciate all the innovation in this case; even though these are not quite open source licenses, of which we have many, they are still better than proprietary software for many users. Any other questions? Well, the question about Redis: wasn't it AGPL before they decided it's not their model? So, Redis itself was and continues to be permissively licensed, I think that's BSD or some similar license. It is the extensions implemented by Redis Labs which were subject to the license change. I don't remember exactly which license they were changed to, but first there was an attempt to add something called the Commons Clause, which was added to the Apache license in their case, making it not quite open source. But there was a blowback in the community because it confused people: it looked almost as good as the Apache license, while it's not. It's like honey: if it has a little bit of rat poison in it, it doesn't have a lot of the qualities of honey left. So eventually they went and changed to their own license entirely. Right. If you look at MariaDB, they have this approach, BSL, the Business Source License, which falls into what I call "open source eventually". I frankly do not like this license approach; I like it much less than open core, and here's why. If you look at the MySQL approach, the MySQL Community Edition, while it may be limited in certain cases, is very well maintained by Oracle: you see bug fixes, and most importantly, you get security updates and so on and so forth. And if you need those additional enterprise features, well, you can choose to pay Oracle, or you can use an alternative implementation, for example from Percona, or you can go and build it yourself. Now, what happens in the MariaDB case: if it is BSL, for example MaxScale, your only choice, if you want to use a current version even for the basic features, is to buy the proprietary version, and the only open source version you get is something which is unmaintained. You are not getting security fixes for it, and so on and so forth. So the open source version of that "open source eventually" software is typically unsafe to use. Of course, in theory you can say, well, you can hire a team and maintain that outdated version yourself, but that is impractical for most organizations. With "open source eventually" there is a period of so many years: it can be three years, it can be whatever the copyright holder chooses; I don't remember what it is for the latest releases of MariaDB MaxScale. Now, to be clear: the MariaDB server itself remains GPL, it remains completely open source, but there is some extra software, like MaxScale, which is not open source in MariaDB's case. What is the next database that Percona will support?
Well, we are looking at some databases, but we are not making any public commitment to that yet. So, the question about MongoDB being kicked out of all the major distributions: do you think they will survive that? This is something to understand about how many venture-funded open source companies operate. First, you really get a lot of stuff for free, and that's how the venture capital is spent. And then, after they reach what is called critical mass, that user base starts to be heavily monetized. MongoDB made that choice with SSPL after they had, in their opinion, reached critical mass, so that's fine with them. And MongoDB's focus is very much on MongoDB Atlas, their cloud-based variant. At least in the short term, we can see that with the license change to SSPL they can still maintain growth, at least in revenue; it is a public company, so you can track that. I think it is very interesting what will happen over the next three to five years. In fact, I think the fact that MongoDB is now not open source creates an opening for some really open source document database, and I think it will come out of somewhere in the next two years; we'll see. Okay, well, I don't see any more questions coming in. Okay, yeah, thanks for your talk, and thanks everybody for being here at the FrOSCon online edition. Feel free to join our closing in a few minutes, next in room one. Oh, there's one more question: did we reach a tipping point? Well, I think the cloud providers are the important change which forces a lot of the open source community in the database space, especially, to evolve their business models. So that is surely there, and I think that is a very important transformation happening in the industry. Okay, well, it looks like that was the last question. So thank you everybody for attending; it's been a pleasure.
A discussion of the changes, trends, and database technologies that are going to impact your business in the next 12-18 months. In the current technology landscape, we have a lot of great innovation happening, especially when it comes to database technology. A few examples include new data models such as time series or graph, and a focus on solving the SQL-at-hyper-scale problem, a solution that has long been elusive as scale was becoming synonymous with NoSQL environments. We now have new Cloud-Native database designs coming to market using the power of Kubernetes, as well as employing Serverless concepts. In this presentation, we will look at changing trends in database technologies and what is driving them, and talk about changes to Open Source licenses, Cloud-based deployment, and the emerging class of not-quite Open Source database software.
10.5446/51670 (DOI)
So I will talk about how spring temperatures shape West Nile virus transmission in Europe. This work was led by Giovanni Marini, who unfortunately cannot attend today's presentation, so I hope it will be as interesting as it would have been with him presenting. Next slide, please. The outline of the presentation is a brief background on the West Nile virus; then I'm going to talk about the previous work on West Nile transmission in Emilia-Romagna, which is a region of northern Italy; and then how we tried to extend it to the whole of Europe and investigate how spring temperature may affect human cases. Next slide, please. So, West Nile is a virus which mainly circulates among mosquitoes and birds, where mosquitoes act as the vector. Given that the mosquito species which is the primary vector also bites other animals, like horses and humans, incidental infections can happen in horses and humans, which are dead-end hosts: an infected human is not able to infect the mosquito vector, and so is not able to further the transmission. The disease is mostly asymptomatic, but in rare cases infected people can develop severe neuroinvasive complications. So most of the time the disease goes undetected by the public health system, and when the system starts to see cases, the circulation is already widespread or has been happening for quite a long time. Next slide, please. So obviously, given that there is a mosquito vector involved, temperature plays a key role, because it affects mosquito dynamics. Several works have shown how warmer conditions can speed up the development and the life cycle of the mosquito and also increase the biting rate on the hosts, which can be birds, and in some cases also humans or other mammals. Similarly, temperature also affects pathogen transmission, as warmer conditions may decrease the incubation period in both the host and the vector and increase the transmission probability to other hosts. Next slide, please. So, as I said, there was a previous work led by Giovanni in which he analyzed West Nile virus transmission in northern Italy, in particular in the region of Emilia-Romagna, which is characterized by the lowlands of the Po river. As you can see, the points are the traps deployed by the surveillance system. In this work there is an entomological model coupled with an epidemiological model. The entomological model analyzes the life cycle of the mosquito; it was calibrated on mosquito captures, with temperature as input data, and from this model one is able to estimate the expected mosquito abundance. This is coupled with the epidemiological model, which describes the transmission between birds and mosquitoes. In this way, one can compute the prevalence in the mosquito vector and, from that, compute the human risk given the mosquito prevalence. The model was calibrated on data from past years, and it's very interesting to note how 2018 was a very exceptional year, with, later in the season, about 100 neuroinvasive human cases, a lot more than the total number recorded in the previous years. This is not specific to the Emilia-Romagna region; in 2018 it also happened in almost all of Europe. So, here's the next slide.
These are some results. As you can see, spring was exceptionally warm in 2018, and this seems to have amplified West Nile virus transmission at the beginning of the season. Having higher transmission at the beginning of the season resulted in more circulation during the season, and therefore in a lot more cases than usually observed by the public health system. Next slide, please. So, we observed that pattern in the Emilia-Romagna region, and we wanted to explore whether this pattern is also observable in the rest of Europe. Again, there's a problem of data, which is not available for the rest of Europe in the same detail as in Emilia-Romagna. So we collated data from the ECDC, which reports cases from all of Europe for the period from 2011 to 2019 at the NUTS3 level. As you can see in the upper right graph, 2018 was a very exceptional year for all of Europe in terms of West Nile virus transmission. So we took this data into consideration and applied a statistical analysis to quantify how spring temperature can influence human West Nile virus infections at a European scale. Please switch to the next slide. The model we use considers the number of laboratory-confirmed cases for each NUTS3 area and each specific year. Given that the ECDC records only report positive cases, when an area has no cases, so it is not present in the database, but it has already had cases in a previous year, we assume that there were zero reported cases, so we inflate the database with zeros. We also take the land surface temperature as an independent variable in the statistical model. Can you please go ahead? Yeah, thank you. The model was a zero-inflated negative binomial model. It's a two-part model: the first part is a binomial part which models the presence or absence of cases. We use this framework because we inflated the database with zeros, so we think it was important to model this part as well. The second part of the model describes the number of positive cases recorded. As you can see from the formula here, the dependent variable was the number of laboratory-confirmed cases, while the independent variables were the average spring temperature in the period 2003 to 2010, which acts as a gradient differentiating southern from northern Europe, and the temperature anomaly: we normalize the temperature as the difference from that average, so this variable characterizes whether each year is warmer or colder with respect to the mean. And then we also consider whether West Nile was already circulating in that area in the previous years. So it is a simple, explorative statistical model, just to see if we can find some relationship between temperature and the number of cases recorded by the ECDC at a European level. Next slide, please. Indeed, there is a correlation, as expected. As you can see in the upper panels, the average number of cases expected conditional on the average spring temperature (upper left panel), or with respect to the temperature anomaly (upper right panel), shows an increasing pattern: the warmer the average spring temperature, the more cases, and the warmer the specific spring, the higher the number of cases. On the other hand, the lower panels model the probability of observing zero or a positive number of cases. As you can see, again temperature plays a role: the warmer the temperature, the less likely we are to observe a zero record.
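Since the slide's exact notation is not preserved in the transcript, here is a sketch in LaTeX of what a zero-inflated negative binomial model of the kind described might look like; all symbol names are hypothetical stand-ins, not the paper's notation:

```latex
% Sketch of a zero-inflated negative binomial model (hypothetical notation).
% Y_{i,t}: laboratory-confirmed cases in NUTS3 area i and year t.
\begin{align*}
  \Pr(Y_{i,t} = 0) &= \pi_{i,t} + (1 - \pi_{i,t})\,\mathrm{NB}(0 \mid \mu_{i,t}, k) \\
  \Pr(Y_{i,t} = y) &= (1 - \pi_{i,t})\,\mathrm{NB}(y \mid \mu_{i,t}, k), \qquad y > 0 \\
  \log \mu_{i,t}   &= \beta_0 + \beta_1 \bar{T}_i + \beta_2 \Delta T_{i,t} + \beta_3 W_{i,t^-}
\end{align*}
% \bar{T}_i: average spring temperature 2003--2010 (south-to-north gradient);
% \Delta T_{i,t}: spring temperature anomaly of year t;
% W_{i,t^-}: indicator of WNV circulation in the area in previous years;
% \pi_{i,t}: zero-inflation probability; k: overdispersion parameter.
```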
Next slide, please. So, to conclude: the previous work on the Emilia-Romagna region and the following work at the European level, which is now published in Acta Tropica, show that there is an association between previous West Nile detection and a larger number of cases in the following years. It is more likely to have higher circulation in usually warm regions, and that is further enhanced if the spring temperature of a specific year is warmer than the average. So we are considering warmer spring temperatures as an early warning signal for West Nile virus human infections. I thank you for your attention, and if there is any question, I would be glad to answer it. Thank you, Mattia, and my apologies for some technical problems, but we're still on time. So please, questions. Maybe I will start off unless somebody has a question. I think that's me. Nice talk. Thank you. Your first point there: previous West Nile detection is associated with larger outbreaks. That implies, I mean, if you've had a large outbreak, let's say in 2018, that conclusion would suggest you're likely to have another one in 2019. Is that right? Not exactly; "larger outbreaks" is probably a catchphrase for the presentation. What we can say is that detection in one year makes circulation in the following year more likely. Were there also far more places with cases, as opposed to just more cases? West Nile is spreading across Europe, and it seems that in most areas, once it appears, it's bound to stay. And I think that's the problem with detection: given that it is not really symptomatic in most cases, it is difficult to detect, but if you look for it in the mosquito vector, when you are in an area that tests positive for West Nile circulation, in time you will pile up cases. Again, 2018 was an exceptional year, but probably conditions aligned: warmer temperature, previous circulation, a lot of vertical transmission from the previous year, and you get this outbreak happening. At the moment, 2018 was the highest level, the peak level observed. Okay, thank you. We have one more question, from Wim, I think. Yeah, thank you for your nice talk on West Nile. I see for your ECDC data you use data from 2011 to 2019. Is there a specific reason not to have 2010 in the database? That was also the start of large West Nile circulation in Europe, with a large outbreak in Greece, and very specific climatic conditions: 2010 also had a very strong winter, though I don't know what the spring temperature did. Was there any specific reason not to have 2010 in your analysis? So, we use the temperature data from 2003 to 2010 as a reference, and then we start the database for the dependent variable from 2011. As you say, 2010 is the first year West Nile appears in Europe in the ECDC data, so we discarded that first year. It was also a very large outbreak at the time, mainly focused on Greece. So it was included in your reference point? No, at the moment, no. Okay. Okay, thanks. Any other questions? I have a question. I was wondering, do you plan to include other variables besides temperature, like rainfall or water areas, in the model? Yeah, it would be nice. I think it's a future direction for research to include more variables.
Again, this work was to explore the association we found in the Emilia-Romagna region in the previous work. But yeah, West Nile virus circulation is a very complex system and mechanism, and it cannot be explained by temperature only. So yes, indeed, we should include more variables, also relating to climate change and future temperatures. Have you tried doing that, like extrapolating the model you fitted with projected future climate scenarios? It would be nice to do that, but we were not so confident about the reliability of the result. Again, it's an indication that with rising temperature, things in terms of West Nile are getting worse and worse. So if we expect an increase in temperature, then we have to expect an increase in West Nile virus transmission and circulation across all of Europe. Temperature most likely affects the transmission of West Nile because it boosts the mosquito population, which acts as the vector: warmer conditions can speed up the development of the mosquito and also shorten the incubation period of the virus within the vector. So if you have warmer conditions, most likely you can have a more aggressive or more abundant vector population, and if the virus is introduced into the system or the area, then it can circulate with much more velocity. So yeah, that's the main reason temperature is so important in the modeling and the understanding of West Nile virus circulation. There are a lot of other factors, because we know that birds are the primary hosts, but there's not so much information about which bird species are really maintaining and sustaining the virus in the environment. The virus is able to overwinter through the mosquito population, which enters diapause, and then, by vertical transmission, it starts the cycle in the next season. There have been observations of diapausing mosquitoes found infected with West Nile during winter time, and at least in the USA there is also horizontal transmission between birds, but there is no evidence of that here, because there's a lack of studies in Europe about the role of birds in maintaining or even overwintering West Nile. So yeah, I think overwintering at the moment is probably due to the mosquito vector, which carries the virus on to the next season. I don't think there is much of a relation between the two viruses, but for sure the present situation with COVID could worsen a possible West Nile virus outbreak, because the restriction measures for COVID limit the movement of people and make it more difficult to do entomological control or interventions, and they have also stressed the public health care system, so symptomatic West Nile virus cases could have less access to care or be confused with other diseases. I mean, if you develop a fever now, there's likely a fear that you have COVID rather than West Nile, so you may get a wrong diagnosis at the moment. So yeah, surely COVID has impacted the health care system, and that is reflected on West Nile, but it's too early in the season to predict or have an idea of how much West Nile will circulate this year. But yeah, if we're going to see another outbreak like 2018, it will be worsened by the COVID situation.
Mattia Manica is a researcher at Fondazione Edmund Mach. His presentation is centered around one of the most recent publications he has contributed to, titled “Spring temperature shapes West Nile Virus (WNV) transmission in Europe” (https://doi.org/10.1016/j.actatropica.2020.105796). Mattia explains how spatio-temporal conditions shape WNV transmission, why WNV circulation tends to be higher in warmer regions, how the impact of changing temperatures in approaching springtime on WNV transmission can be predicted, and why the new results, in conjunction with previous findings, can serve as indicators for the reliability of surveillance systems and WNV's overwintering capacity.
10.5446/51394 (DOI)
So hello, good morning, and Qapla' for those of you who speak Klingon. Yeah, so hello, my name is Bodil and I am a web developer. I work at a company called Comoyo, which is like Telenor's version of Xerox PARC, I should mention. We do a lot of cool stuff. We are hiring, just saying; that's going to be my only commercial message today. Because I want to talk about hipsters. Do you all know what hipsters are? Hipsters like better music than you. They dress cooler than you. They generally are a lot better than you, at least according to themselves. And of course, translating that to our business, hipsters tend to be functional programmers. And there's a reason for that. Functional programming, to the uninitiated, sounds awfully complicated. It has all these strange words that you don't understand. Quite possibly the functional programming hipsters don't understand them either, but they know how to say the words so they can make you guys feel stupid, right? I'm sure you've been on the receiving end of that a couple of times. So my aim for this talk is to teach you guys those words and give you a bit of an idea of what they mean, in the simplest sense, so that you can go back to your co-workers and use those words on them and make them feel stupid and make yourselves look smart. So, wait, I actually got a costume for this talk, because hipsters come with ironic glasses. The thing is, though, this isn't the first time I've done this talk, and finally some recordings of previous versions have been released. So I've actually noticed that when I wear these glasses I look like a grandmother. And the last thing a hipster wants is to look old, so I'm going to skip this, if you don't mind. I did look horrible, right? Yeah, I did. I can tell. Right, let's get started. So first of all, I would like to introduce you to what I have identified as the four great paradigms of programming. First, there is the original systems programmer, the imperative programmer, the C programmer: staying close to the metal, preferably even writing machine code. These guys, yeah, one of these guys actually invented regular expressions; in case you don't recognize them, that's Ken Thompson, and Dennis Ritchie, who invented C. These guys look the part of the systems programmer as well, as they should, because you all know that the length of the beard is proportional to the skill level of your C programming. Also check out the cool hipster glasses on Dennis there. Then there's the object-oriented programmer. That's you guys, most likely, pictured here in the natural habitat. I'm sure you've been there. I certainly have. And yeah, that looks kind of stupid; that's the idea. So that's the functional programmer. The functional programmer is smarter than you. The functional programmer knows lots of fancy words, and has fancy glasses as well. And finally, that's the logic programmer. Logic programming in general is perhaps something that our kind, as a species, isn't quite evolved enough to deal with yet. So let's just leave that for the next century or something. But long story short, there are two kinds of developers you want to be concerned with. There's the functional programmer, wow, the functional programmer, who, in addition to having an amazing moustache, of course, is smarter than you. And there's you, the Muggle. I can tell I'm in a room full of object-oriented programmers. You guys relate to this, right? Is this a Java crowd? C#? Same problem. So, yeah.
Now, the language of functional programming has mostly leaked into the general consciousness from a language called Haskell. Haskell was designed by a committee of academics, for the pleasure of other academics. In fact, one of the guilty parties is sitting right up front here. I'm not going to call him up, but thank you for Haskell. And the reason Haskell has this wonderful language of confusion is that Haskell derives most of its ideas from something called category theory, a branch of mathematics that, yeah, it's actually that kind of category. I'm not going to go into detail; let's just remember: category theory. And so, functional programming in general: I thought I'd give you an idea of how it came about, because it's quite an interesting story. The first programming languages were designed to be kind of a slight abstraction over machine code. On the other hand, there was this guy called Alonzo Church. He was a mathematician, and he invented something called the lambda calculus. There was this other guy called Haskell Curry; remember that name, it's going to recur a bit throughout this presentation. And he got involved in the design of the lambda calculus as well. The idea is that the lambda calculus is composed only of functions. We don't mess with concrete, messy things like values and numbers. The lambda calculus is just functions. A function can take another function and return another function. It doesn't bother with strings and data types and horrid things like that. So, let's just go with a funny picture. The lambda calculus eventually evolved into something called the typed lambda calculus, which perhaps looks a bit more useful to us human beings, which of course Haskell Curry violently disapproved of, because he's a mathematician, not a human being. So it turned into something slightly more useful, as I said. And incidentally, there was this guy called John McCarthy, who, just as an experiment, took the lambda calculus and made his own version of it, his own notation for it. And one of his students decided, hey, let's try and write an interpreter for this version of the lambda calculus, just for the heck of it. And it turned out that that actually worked. He had designed, without really meaning to, a programming language based on pure mathematics. That was in 1958, and the language was called Lisp. That was the very first functional programming language. But what does that mean, functional programming? Well, it is essentially based on the concept of first-class functions. As I mentioned, the lambda calculus has functions that only return other functions. So: functions are values. Functions can be passed in as arguments to other functions. They can be returned from other functions. They can be stored in variables, much like what you might be used to from object-oriented languages. Except, see, C# of course has caught up and added the idea of first-class functions a while ago, and Java is still trying to catch up. They're getting them, but it certainly took them a while. So, let's have a look at first-class functions. Let's do some live coding. Now, of course, I should inform you of the rules of live coding. Since I'm not Venkat Subramaniam and hence not perfect, there is a chance that I'm going to make mistakes. You need to spot those mistakes. This is on you: if I screw up, that's your fault. Okay? Great. So, this is JavaScript. In fact, it's not quite JavaScript. Notice this type annotation on the function.
This is in fact TypeScript, Microsoft's JavaScript with a type system. I'm going to go through some examples that get progressively more complicated, so I figured the type annotations might help you follow along. So, this means that we define a function called hello, which takes no arguments, and it should return a string, as you can see it does. So, calling this, we'll see that it returns the string "hello, everybody". This should be no surprise to you. But this is actually how I originally discovered functional programming, a bit by accident. Instead of calling the function, I just did away with the parentheses, and suddenly, wow, the function is actually a value. So, what can we do with that? Say we take that value and assign it to another variable. Can we call that? Yeah, we can. We just pass it along. And how about this? This is a stretch, right, this can't possibly work. Let's say we define a function which takes an argument f, and I might even specify that it should be a function, and then just calls f. And then if we go call this: hello. Wow, didn't that work? Oh, sorry. Of course. I'm too used to proper functional languages where the return is implied. That's my excuse, right? See, that way. So, we actually just took a function and passed it into another function, and the other function could just call it, like an alias. So, that's first-class functions for you. You can pass them around any way you like; they're values, just like numbers and strings. That's pretty cool. And it lends itself to certain use cases that are quite common in functional languages. This one is very common; I'm sure you've seen it in your own languages, but we're just going to implement it from scratch: functors. And this is where category theory starts to shine through, because functor is a funny word, right? You probably haven't heard that word before. In some languages, a functor simply means a function; I believe Prolog does that. But a functor in category theory is explicitly defined. Well, this is my definition, so it's probably a mistake, but never mind: a functor is a collection of x that can apply a function f from x to y over itself to create a collection of y. That should be quite clear to everybody, right? Yeah. So, the idea of a functor is that you have a collection type. We'll just go with lists from now on, because lists are easy to understand; or arrays in JavaScript, they're the same thing, just a collection of values, an indexed collection perhaps. Suppose you have a function that takes a value of the type that the list contains, and it returns something else. Given that the list supports the idea of functors, we have a map function that takes your function that operates on single values, and it applies that function to the list, so that we magically transform it, like running the function on each value. I'm just going to show you, because it's very easy to see. Suppose we have a function caps: takes a string, returns a string, turns it to uppercase. We have a list of ponies, and we're going to implement this map function. It's quite easy. We take a function as an argument, specified as a function that takes one argument of any type and returns a value of any type, and we take a list of any type, and we finally return a list of any type. So, we need to construct a new list. We're just going to do a stupid for loop to implement this.
So, essentially what we do is: for each step through the iteration, we push onto the new list the result of calling func on the i-th element of the list. Now, I'm actually going to remember to return this. So, we just follow along, iterating through the list and applying the function on each iteration. So, we have ponies. Do you like ponies? Who likes ponies? I know you do. There are actually people willing to admit it. That's very brave of you. Usually I ask the audience if there are any bronies, and everybody just goes, no, not me, I like cars and power tools. But very brave of you. So, we have a list of ponies: Rainbow Dash, Pinkie Pie, and Twilight Sparkle. Let's try using that map function on our ponies. We first specify the function that we want to apply, and then we specify the list of ponies. Now, what's going to happen when I evaluate this? Rhetorical question, right? It's going to go uppercase. Yeah. So, you see that the caps function works on only one pony at a time, and we just apply it to the whole list. That is a functor. Well, the functor is the combination of the idea of the collection and the map function; nothing's ever easy in category theory. But what we're doing here is essentially a functor. So, I mentioned that you've probably seen this before. In fact, JavaScript comes with this map function defined on the array object. So, we could have just gone ponies.map(caps), and this is JavaScript's native implementation. It does the same thing, as you can see. Most people call this mapping; if you've heard of MapReduce, this is the idea of the map there. Of course, you guys are going to be functional programmers, so you call these functors. At least now you can. I'm just going to go through another common, very similar use case: a filter function. The idea is that we have a function that, instead of returning a new value, quite simply returns a boolean. So, this is a test: a test of whether a pony is too cool. Of course, Rainbow Dash is too cool for most things. So, it returns true if the pony is not Rainbow Dash, because the too-cool test is intended to filter out ponies that are too cool. Hence, for Rainbow Dash, it should return false. Let's implement that filter function. We have the same structure as the map, except that instead of just applying the function, we call it on the i-th element of the list, and if that is true (some languages like to be explicit about this, right? In JavaScript almost everything is truthy, so we're not going to bother), we push the unchanged value onto the new list. And then we return that. If we apply the too-cool filter to ponies, we should get a list without the coolest pony, because Rainbow Dash doesn't want to be in a list with mere earth ponies and unicorn ponies, right? Say, just as an experiment, we flip this. So, in an absurd imagined world where Rainbow Dash is the only pony who isn't too cool to be in this list, we see that the list now only contains Rainbow Dash instead. So, we're essentially just filtering on the truthiness of that function.
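The on-screen code isn't preserved in the transcript, so here is a minimal TypeScript sketch of the map and filter implementations described above; the names follow the talk, but the exact bodies are reconstructions:

```typescript
// A reconstruction of the talk's map and filter, written with plain for loops.
function map<A, B>(func: (x: A) => B, list: A[]): B[] {
  const newList: B[] = [];
  for (let i = 0; i < list.length; i++) {
    newList.push(func(list[i])); // apply the one-value function to each element
  }
  return newList;
}

function filter<A>(func: (x: A) => boolean, list: A[]): A[] {
  const newList: A[] = [];
  for (let i = 0; i < list.length; i++) {
    if (func(list[i])) {
      newList.push(list[i]); // keep only the values the test accepts
    }
  }
  return newList;
}

const caps = (s: string): string => s.toUpperCase();
const tooCool = (pony: string): boolean => pony !== "Rainbow Dash";
const ponies = ["Rainbow Dash", "Pinkie Pie", "Twilight Sparkle"];

console.log(map(caps, ponies));       // ["RAINBOW DASH", "PINKIE PIE", "TWILIGHT SPARKLE"]
console.log(filter(tooCool, ponies)); // ["Pinkie Pie", "Twilight Sparkle"]
```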
All right, now reduction. Reduction is the other part of MapReduce. The idea is that you have a function which takes two arguments and of course returns one value, and for each element of the list, we take the accumulated value of going through our iterations, that is a in this case, and the current point in the iteration, which is b, and we run the function on those, and it kind of accumulates the result. This is hard to explain, so I'm just going to implement it. Once again, a new list. Or in fact, let's call it result, because it's not necessarily a list. And notice that in addition to the function and the list, we take an initial value. So, we're going to start with that initial value. You should have pointed that out in time; I'm watching you. And we need an iteration variable. Now, what we do is: for each step through this iteration, we call func on the current result and the current value, the value at the current position in the list, and we put the result of that back into result. And then we return result. So, what can we use this for? For one thing, since I have an add function prepared, I think you can probably guess that that's useful somehow. Suppose we have a list of the numbers 1, 2, 3, 4 and 5, and let's start with an initial value of 0. Care to guess what that does? Sums the numbers. It sums the numbers. It essentially starts with 0 and goes: 0 plus 1 is 1, 1 plus 2 is 3, 3 plus 3 is 6, and so on, until you get the total. So, that is a reduction. Interestingly, reductions are also known as catamorphisms. That's category theory for you; we didn't go with simple words like reduce. To be precise, a catamorphism is a right reduction, so we should have been iterating from the end of the list to the front if this were an actual catamorphism. But close enough. At least now you can use the word catamorphism, and that's what matters, right?
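Again as a hedged reconstruction of the code on screen, the reduce just described might look like this in TypeScript:

```typescript
// A reconstruction of the talk's reduce: fold each element into an accumulator.
function reduce<A, B>(func: (acc: B, x: A) => B, list: A[], initial: B): B {
  let result = initial;
  for (let i = 0; i < list.length; i++) {
    result = func(result, list[i]); // accumulate: result carries forward each step
  }
  return result;
}

const add = (a: number, b: number): number => a + b;
console.log(reduce(add, [1, 2, 3, 4, 5], 0)); // 15, the running total of the list
```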
So, reduction has an interesting property, because it turns out that the map operation is in fact just a special case of a reduction. We can do map with just reduce. Let me show you how. Suppose we have a function called shoutyMapReducer that takes a list of strings as its first argument and just a string as its second argument, and it returns a list of strings. This simply concatenates onto the list a a list containing the uppercase version of b, nothing fancier than that. Let's reduce that: shoutyMapReducer on ponies and the empty list. Looks familiar. I essentially just mapped the caps function, with a bit of boilerplate. Now, we've got something called higher-order functions. Higher-order functions mean essentially that functions can take other functions as arguments, as we've seen. It also means, more interestingly, that functions can return other functions. In essence, you can write functions that are function factories. A FunctionFactoryFactory, if you're more familiar with Java. The idea of higher-order functions is that you can use functions to modify other functions, or to create functions for specific purposes. I'm going to show you. In fact, we could generalize the case of the shouty map reducer. Once again, we have the caps function. Here's the shouty map reducer; just check that it still works on ponies and the empty list. It works. Now, suppose we want to generalize the way we turn a function suitable for mapping into a function suitable for reducing. Let's write a generic map reducer, or a map-reducer factory; let's just go with mapReducer. It takes a function with one argument that can be anything and that returns anything. What it actually returns is a function that takes two arguments, as you will recall the function you pass into a reduce should. A function. Very good, you're learning, I like that. Okay. So that function essentially does the same thing as the shouty map reducer: it concatenates the result of calling func on b onto a. So we should be able to, let's just get rid of this reduce, we should be able to say reduce of mapReducer of caps on ponies and the empty list. That should turn the caps function, which only takes one argument, into something that takes two arguments and emulates map through a reduction. And it does. Yay. That's a higher-order function. Now you know what they are. So, combinators are essentially higher-order functions as well, but the idea of combinators is that they modify a function in a fixed way. So it's less of a factory function and more of an operator, perhaps, on functions. You might have heard of combinators already, things like the Y combinator, which is a combinator for getting you funded. And there are other combinators, like the S combinator and the B combinator, and there's a whole branch of logic devoted to combinators. Let me just show you the stupidest example possible. I have a function square. It takes a numeric argument and returns the square of that argument. I'm going to invent a combinator for you: I'm going to call it the three-combinator. The three-combinator works on a function, so we need an argument for that: a function that takes one numeric argument and returns a numeric value. And it returns a function that takes a number and returns the result of calling func on that number, plus three. So applying the three-combinator to our square function gives us a new function, as expected. Let's store that as squareAndThree. Now, what does this do? 28. So the three-combinator essentially just takes any function that operates on one number and adds three. That's super useful, right? You can see the point of combinators already, I'm sure. Actually, let's see something a bit more useful. Question from the audience: if we pass in a function that takes a string and returns its length as a number, would we still be able to add three? Absolutely. The three-combinator only needs the return value to be a number. That's correct. So I thought I'd introduce another combinator, called the null-combinator. That should be a little more useful. Suppose we have a function ponyType, and we have a data structure representing a pony: Pinkie Pie, and she has a type. Every pony has a type. Pinkie Pie is a pegasus pony. Suppose we call ponyType on Pinkie. That returns "pegasus pony". Does that look right? I thought you guys said you were bronies. At least there should be somebody. Pinkie Pie is an earth pony, you stupid, ignorant people. Right, now it looks better. Great. So suppose we just fetched Pinkie Pie out of a database, and that seemed to work. Now let's suppose we try to fetch Rainbow Dash out of the database. Rainbow Dash is obviously way too cool to be fetched out of any old database, so let's suppose the result of the database query is null, because it failed. What happens if we call ponyType on Rainbow Dash? Exception. That's an error. That's not good. So the null-combinator is the idea of wrapping a function in an automatic null check. We have a function that takes one argument of any type and returns anything, and it returns a function that takes an argument x. If x is null, return null; else, return func of x. Okay. So nullSafePonyType is nullCheck applied to ponyType. Let's just check that it still works for the expected case. It does. It still fails here, because we're not using it yet. Now, instead of crashing your program, it just returns null when null comes in.
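Here is a sketch of the higher-order functions from this part of the talk, the generic mapReducer, the three-combinator, and the null-combinator, reconstructed under the same caveat that the exact on-screen code isn't in the transcript:

```typescript
// Turn a one-argument mapping function into a two-argument reducing function.
function mapReducer<A, B>(func: (x: A) => B): (acc: B[], x: A) => B[] {
  return (acc, x) => acc.concat([func(x)]);
}

// The three-combinator: wrap any number-returning function so it adds three.
function threeCombinator<A>(func: (x: A) => number): (x: A) => number {
  return (x) => func(x) + 3;
}
const square = (n: number): number => n * n;
console.log(threeCombinator(square)(5)); // 28
// It also works on a string-length function, since only the return type matters:
console.log(threeCombinator((s: string) => s.length)("pony")); // 7

// The null-combinator: wrap a function in an automatic null check.
function nullCheck<A, B>(func: (x: A) => B): (x: A | null) => B | null {
  return (x) => (x === null ? null : func(x));
}
const ponyType = (p: { type: string }): string => p.type;
const nullSafePonyType = nullCheck(ponyType);
console.log(nullSafePonyType({ type: "earth pony" })); // "earth pony"
console.log(nullSafePonyType(null));                   // null instead of a crash
```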
Of course, you should do better than that in your own code. There's something called error handling, right? You should at least display an error message instead of just swallowing this silently. But I know you guys tend to do that anyway, so at least now I've made it more comfortable for you. You can thank me later. Okay. So let's look at function composition. This is where functional programming, to me, starts to get really useful. Composition is essentially the idea of reuse in functional programming, as opposed to inheritance in Java and C# and languages like that. The idea of composition is quite simple. Suppose we have two functions: the function caps, which we know, and a function hi, which simply says hi using my input. Let's make a compose function. It takes one function that takes one argument, and another function that takes one argument; the types don't matter in this case. We return a function that also takes one argument and returns the result of calling function two on the result of calling function one. Oops, wait. There we are. I don't know what happened there. Oh, I see what I did. Why didn't you guys tell me that? Undo, undo. There we are: func one of x. So the idea is that we take two functions and string them together, returning a function that is a chain of those two functions. Suppose we call caps on hi on "everypony": we shout HI EVERYPONY. And composition is the idea of composing those two functions together, hi and then caps, which gives us a function that we can just call with "everypony", and it does the same thing. Now, this might not seem immediately useful to you; that's because it's a contrived example. However, most functional languages come with basic functions that you can use as building blocks to compose computation chains, using only composition. So it's actually very, very powerful, and it doesn't just involve saying hi to other ponies. But you can do that too. So, applicative functors. I really included this because it's a great word: applicative functors. Starting to sound very, very complicated, right? There we are: an applicative functor. This is my naive implementation of the special case of an applicative functor on lists in JavaScript. Don't look at it, it looks horrible. I'm just going to show you how it works. The applicative functor is a functor which, in addition to the map function, also has an applicative map function. And the applicative map function lets you specify more functions and more lists, not just the one that map gets. Do we have hi and caps? I think we do. Suppose we apply hi and caps to ponies. What am I doing? How come you didn't spot that? You're sleeping. So, notice that instead of specifying one function, I specify a list of functions, and ponies as arguments. What happens is that it tries every possible combination of the inputs. We see that first it does hi on every pony in the list, and then it does caps on every pony in the list. Yay.
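A sketch of compose and of a naive list "applicative map", matching the behavior just described (every function applied to every value); as before, the bodies are reconstructions, not the talk's exact code:

```typescript
// Compose two functions: run f1 first, then f2 on its result.
function compose<A, B, C>(f1: (x: A) => B, f2: (y: B) => C): (x: A) => C {
  return (x) => f2(f1(x));
}
const hi = (s: string): string => `hi ${s}`;
const caps = (s: string): string => s.toUpperCase();
console.log(compose(hi, caps)("everypony")); // "HI EVERYPONY"

// Naive applicative map on lists: apply every function to every value.
function ap<A, B>(funcs: Array<(x: A) => B>, list: A[]): B[] {
  const result: B[] = [];
  for (const f of funcs) {
    for (const x of list) {
      result.push(f(x));
    }
  }
  return result;
}
console.log(ap([hi, caps], ["Rainbow Dash", "Pinkie Pie"]));
// ["hi Rainbow Dash", "hi Pinkie Pie", "RAINBOW DASH", "PINKIE PIE"]
```

The two-argument hug case that follows works the same way once the function is curried so that each application consumes one list at a time, which is where the currying section comes in.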
Applejack. Very good. Awesome. Okay, two more. No? Rarity. And this pony is really popular. You should know this. My Little...? No, that's the general case. Fluttershy. All right. Now we have two lists of three ponies. Let's use the applicative map function to make them hug each other. So I just pass in the one hug function. I still need to pass it in a list. But in addition to ponies, I pass in other arguments. More ponies. Because hug expects two arguments, I need to pass in two lists. Now what happens? Every pony in the first list hugs every pony in the other list. Every possible combination of that. That is how the applicative map function works on lists. It's not very useful. But it's very cute. Applicative functors are a lot more useful given other data types than lists. Just take my word for that. If you're interested, then read a book on Haskell or Scala. But at least: ponies hugging each other. Right? Okay. Currying. This is the most overused illustration for currying ever. So I'm using it ironically. Currying is the act of transforming a function of several arguments into a chain of functions of one argument that will yield the same result when called in sequence with the same arguments. I think actually the formula below that looks clearer than the text. The idea is that you have a function that takes three arguments. Currying is the act of turning that into a function that takes one argument and returns a function that takes the next argument, which returns a function that takes the next argument, which returns the result. And the result is the same as calling the initial function with all three arguments. Currying, as you might have guessed, was named for Haskell Curry, who invented the idea of currying. Just that it turns out that a couple of years earlier, some guy called Moses Schönfinkel had actually invented it. But word of it didn't reach Curry until later, much later, if Curry even knew. I suppose he must have. I believe Schönfinkel was living in Stalin's Russia at the time, and apparently communication between Stalin's Russia and the rest of the world wasn't doing that great. I don't know if he was in a gulag or something like that, but it sounds exciting, so let's imagine he was. So if you come across somebody actually knowing about currying and talking about currying, and kind of upstaging you, so that you don't look quite as smart using the word currying: just point out that it's actually Schönfinkeling. Okay, now let's look at currying. I already wrote an implementation for you. It gets strange, because JavaScript is a strange language. The thing is, for one thing, in JavaScript you have no way of telling how many arguments a given function expects. So my generic currying function takes the function, and it also needs to know the arity of the function. Arity, that's a nice word. Not a pony, that's Rarity. The arity of a function simply means how many arguments it takes. So arity three means it takes three arguments, quite simply. I've written an add function that takes any number of arguments, which lets me curry it to any arity. Suppose we want one that takes three arguments. So let's just check that this add function works. One, two, three. Six, looks good. Let's curry that. Okay, that returns a function. It takes one argument, one. That returns another function, which takes one argument, two. That returns another function, which takes an argument, three, which returns the result of the computation. So that was a curried function for you. Doesn't seem immediately useful either, does it?
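A sketch of such a generic currying function in JavaScript, taking the arity explicitly since JavaScript will not reliably tell you how many arguments a function expects; this is my reconstruction, not the exact code from the slides:

    function curry(func, arity, collected) {
      collected = collected || [];
      return function (arg) {
        var args = collected.concat([arg]);
        return args.length >= arity
          ? func.apply(null, args)     // all arguments gathered: call through
          : curry(func, arity, args);  // otherwise wait for the next argument
      };
    }

    // The variadic add from the talk.
    function add() {
      var sum = 0;
      for (var i = 0; i < arguments.length; i++) { sum += arguments[i]; }
      return sum;
    }

    curry(add, 3)(1)(2)(3); // 6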
Actually, there's a special case of currying called partial application. That's probably what you're going to come across most often. And that is starting to become useful, because it lets you encapsulate some kind of state into a simple function by pre-filling arguments. Let me show you. Suppose we make a variable oneAndTwoAnd, which is the partial application of add with the arguments one and two. We get back a function, which we can then call with one argument and get one and two and three, right? Six. Or with more than one argument, because add takes any number of arguments. So four in addition gives us 10. And that looks right, yes, 15. So partial application is a way of pre-filling arguments to a function. If you think about it, that lets you actually store some kind of context, or state even, in the function that your partial application factory is creating. Trust me, that is super useful. But this is also why partial application is interesting: most people actually confuse currying as a whole with partial application, which means that if you catch somebody talking about partial application and calling it currying, you can call them out on that and make them look stupid. You can buy them a beer later, if you like, for that. Okay, now the big one. You ready for this? Monads. Monads were, of course, invented by the German philosopher Gottfried Wilhelm von Leibniz. And those monads have nothing to do with monads in category theory. They just happen to have the same name. Also, Leibniz looks awesome in hipster glasses, so I had to put him in. In a more practical sense, monads from category theory were introduced into computer science, with Haskell of course, by a man named Philip Wadler. In fact, he's the only guy I didn't have to photoshop. Fancy that. And monads were introduced into Haskell because Haskell is a purely functional language, which means that functions in Haskell can have no side effects. They can't change the environment around them. They can't even print hello world, because that would be a side effect. So, monads were introduced as some kind of magic to cheat and get around the fact that Haskell is a totally pure language, and would be completely useless if you couldn't even print hello world to the screen, right? Philip Wadler discovered that you can actually cheat and do that in a purely functional language by using monads. The IO monad, specifically, in this case. And monads are defined by the three monad laws. These are not the three monad laws. The first monad law is: you do not talk about monads. The second monad law is: you do not... yeah, you get the idea. There's actually some truth to that, because it is traditional, since monads are so hard to understand and those of us who do understand them like to remain an elite, that once you understand monads, you're supposed to write an extremely confusing tutorial on monads, and then you can put it online and make sure that nobody else understands them. This has been done thousands of times already. Right. But the best part about monads is that monads are also known as Kleisli triples. Doesn't that sound awesome? Kleisli triples. I love that one. And they're just synonyms. So feel free to substitute Kleisli triples for monads if you don't feel you sound confusing enough. Now, I'm going to show you a simple use case for monads. I've already established that we are using lists for most of these examples. So, I'm going to show you, quite simply, how to use the list monad.
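Before the list monad: the partial application just shown, sketched in JavaScript. The generic partial helper is my stand-in for whatever the slides used, with add redefined so the snippet stands alone:

    function partial(func) {
      var preset = Array.prototype.slice.call(arguments, 1);
      return function () {
        var rest = Array.prototype.slice.call(arguments);
        return func.apply(null, preset.concat(rest)); // pre-filled args go first
      };
    }

    function add() {
      var sum = 0;
      for (var i = 0; i < arguments.length; i++) { sum += arguments[i]; }
      return sum;
    }

    var oneAndTwoAnd = partial(add, 1, 2);
    oneAndTwoAnd(3);       // 6
    oneAndTwoAnd(3, 4);    // 10
    oneAndTwoAnd(3, 4, 5); // 15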
And our use case is going to be the elements of harmony. As we all know, each of the mane six ponies is associated with her own element of harmony. So, I have essentially just created an HTML list, an ordered list of ponies, with sublists containing each pony's corresponding element of harmony. And I want to take that list as an input parameter to a function and return the list of elements of harmony. Which means that I should be... Oh, wait. Let's just look at the list and explain it properly. So, the idea is that I need to pick out the children of that list, right? And then the grandchildren of that list should be the elements of harmony, because the children should be each pony, and each pony has a child of its own, being the element of harmony. If you know HTML, this should be obvious. Otherwise, just take my word for it. It's a tree structure, right? So, I have a children function. It takes a DOM element, that would be an HTML tag, and it returns a list of the children of that element. Let's check that that works. I'm picking out the list of ponies from the previous slide, these are HTML slides, by its ID, and storing that in ponies. And then I'm calling children of ponies. So, ponies is the whole list. That is the element that goes into this function, and it should return each pony, in a list. And it does. Yay. So, hey, you guys remember composition, right? So, obviously, to get the grandchildren, we just compose children with itself. So, compose children and children, and just call that on ponies. Nah. That doesn't work. Why doesn't that work? Anybody care to guess? Because children takes one element in and outputs a list of elements. So the second call to children gets a list, and thus crashes horribly. Now, the idea of the list monad is that it can take a function like that and, what's the term, lift it into the list monad. At least, the list monad is in charge of turning this function into something that is composable, given the case of one element in and a list of elements out. Now, monads consist of two things in their simplest form. The unit function, which takes a value and, in this case, returns a list of values. The idea is that it turns the case of the one element into something that is composable, meaning just a list containing that element. And the function bind. This is the other part. Bind, that's a whole lot of function, takes the function that you want to make composable, which, as we know, takes a value that is just one element, so not a list of any, but just any, and returns a list. This is exactly the signature of the children function, except generalized. Now, the bind function should return a function that takes an argument list, which is a list instead of the value, which is just one item, right? And it returns a list, obviously. Wow. I almost ran out of characters. I didn't, in fact, run out of characters. Yeah. Okay. Ooh. I went into Allman-style indentation by accident. Oh, no, wait. So this creates a function which takes the expected list, and we're going to have to do some iteration here. We start out with an empty list, and we need an iteration variable and a for loop, and for each element of the input list, we concatenate onto out the result of calling the function on that element. So this looks a bit like the generalized... the function I used to create, what, the generalized map-reduce thing, right? No, wait.
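The unit and bind pair being typed in, sketched in JavaScript, together with the composition chain it enables. I'm assuming children returns a proper array (so concat works) and that "ponies" is the ID of the list on the slide; both are guesses:

    function unit(value) {
      return [value]; // one value becomes a one-element list
    }

    function bind(func) {        // func : value -> list
      return function (list) {   // the lifted version : list -> list
        var out = [];
        for (var i = 0; i < list.length; i++) {
          out = out.concat(func(list[i])); // expand each element, flattening as we go
        }
        return out;
      };
    }

    function children(element) {
      return Array.prototype.slice.call(element.children);
    }

    var ponies = document.getElementById("ponies");
    var lifted = bind(children);
    lifted(lifted(unit(ponies)));         // the grandchildren
    lifted(lifted(lifted(unit(ponies)))); // the great-grandchildren, as it turns out below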
The idea is that for each element of the list coming in, we essentially expand that element into a list using the input function, and when we are done doing that, concatenating each list together, we get a list of all of it, and we just return that. Trust me, this makes it composable. Shall we check? So that means that, first of all, we need to turn ponies, which is just one element, into something that is suitable for inputting into the composable function, which takes a list. And of course, turning the element ponies into a list means just making a list of the one ponies element, which we do using unit. Now we use bind to make children composable, twice. And now, this isn't quite what I wanted, as I said, but at least it's not crashing, and it is actually returning the grandchildren. The problem is, in HTML, sublists actually are lists, and then there are items inside that list. So we're not looking for the grandchildren. We are looking for the great-grandchildren. Now, that gets us the elements of harmony, and what I just did there was use the list monad to turn the children function into something that is composable repeatedly. And thus, ladies and gentlemen, you understand monads, don't you? Yeah. The thing is, the list monad is perhaps one of the very simplest forms of a monad, and they get a lot more interesting, so go out and read those monad tutorials, and I'm sure you'll be enlightened, yeah? Anyway, if even that doesn't do it for you, Haskell never stops giving. My favorite part of Haskell is this thing called zygohistomorphic prepromorphisms. That is a real thing in Haskell. I love that. And that text you see there, that is all the documentation you get. This is in, I believe, the Haskell standard library at this point, which is totally awesome, right? So this, in addition to some code examples, is all the documentation it is assumed Haskell developers would need, which makes it even more awesome. In case you're wondering, a zygohistomorphic prepromorphism is a prepromorphism with both zygomorphic and histomorphic properties, right? It's obvious. Come on. Right, so if even that doesn't do it for you, there's always logic programming. Thank you very much. If you want to check out the slides or play with that REPL thing that I've been using, the link at the bottom is for the live slides. You can just open that in your browser and play with the code. Right, questions. Yeah, I figured. I don't usually get questions. That's the one. Wouldn't it have been more hipster-like to implement reduce recursively? Sure, you could do that. Because, yeah, this is JavaScript. I was actually trying to be kind in using a for loop, because I figured most people would know for loops, and not necessarily everybody... I mean, you guys look very smart, but probably some of you don't really know too much about recursion. Or those people in the other room, perhaps, I don't know. So, yeah, you could certainly do it by recursion. Mind you, JavaScript doesn't do tail call optimization, so you might blow the stack doing that. Other questions? Is it a hipster trick, since I don't understand how I should use this in my daily work as a grown-up programmer? Sorry? I'm going to repeat the question. Is it a hipster trick that this... You want me to repeat the talk? No. I mean, you're making me feel stupid, right? Because I don't understand how to use this in my daily work. I'm sure you're going to be confused. Don't worry, you look very smart.
Yeah, of course, because I'm doing this to make myself look smart. That should be obvious. I just said zygohistomorphic prepromorphism. Do you know how much I had to practice to be able to pronounce that? You have no idea. So, yeah, of course this is a hipster trick. More questions? We have about three minutes left. Do you think, well, obviously it would be against the whole hipster premise, do you think it's possible that in a few years someone will invent a language that uses natural language to describe functional programming, that doesn't sound hipster and trying to be smart at the expense of other people? I'm sure that from the academic point of view these things aren't deliberately obscure, but they are obscuring. Is there going to be real functional programming for the masses, without us having to become academics? So the question essentially is, is it possible to formulate a language that is actually understandable, describing all these concepts? I don't understand why you would want to do that. Well, certainly people have tried. I mean, just the other day I learned from Don Syme that monad comprehensions in F# are deliberately called asynchronous workflows. That's an inaccurate description, asynchronous workflows, but essentially they are monads. It's kind of a euphemism for monads. And that is becoming quite popular. In Scala, monad comprehensions are called for-comprehensions, because that makes it less threatening. So certainly people are trying to take the edge off the language of category theory, but I don't see the point. I mean, would you want muggles flooding your wonderful ivory tower? I certainly wouldn't. In F# they are actually called computation expressions. Computation expressions. I guess that would be the more generalized form. Just don't mention asynchronous workflows. That smells like an IO monad to me. Or a promise monad. Right, any more? Did you consider using a monad for the null checking you did in the middle? Just an actual, live, usable example of a monad? Yeah, the question is, did I consider using a monad instead of the null-check combinator? I certainly could have. I mean, that is another very simple example of a monad. The Maybe monad, exactly. Yeah, that is a very good use case for monads. Of course, you can also just do a null check, but in Haskell that isn't the way to do things. Obviously, that would be too easy. I'm looking forward to the talk about continuation-passing style. Yeah, I'm not sure. I mean, that sounds a bit... The comment, by the way, was that he was looking forward to the talk on continuation-passing style. I think those Schemers aren't to be trusted, so I'm not going to do that. Right, we are essentially out of time, so unless somebody has a very short question, I'm going to call it a day. Thank you very much for coming. APPLAUSE
Different programming paradigms serve different purposes. Systems programmers prefer tools that are dumb, imperative and close to the metal. Enterprise programmers prefer tools which foster complexity, increasing billable hours and the client's dependency on the developer. And, let me just come clean and admit it, functional programmers do it for that delicious feeling of superiority that comes from looking down your nose at the normals in their caves banging together for loops and mutable state to make fire. Treat yourself to a crash course in the vocabulary of functional programming: lambdas, higher order functions, purity and immutability, the infinite opportunities to throw the word "monad" in the face of anyone who thinks an ironic moustache is enough to justify all that self-assured smugness these days. You'll never have to lose a programming argument again once you've learned where to casually toss terms like "applicative functor" and "Kleisli triple" into the conversation. This is the War of the Hipsters. Arm yourself now, before it goes mainstream.
10.5446/51398 (DOI)
Hello, so let's get started. My name is Christian. My contact stuff is there. This is the only slide that's going to be there, so we're switching over to Visual Studio very soon. So it'll be code, code, code. But I'm going to just briefly introduce what I'm going to talk about. So I'm going to talk about the Nancy web framework, which is a .NET-based web framework inspired by the Sinatra web framework, which is a Ruby web framework. Sinatra is probably the thing that you might use in Ruby land if you find that Rails is too heavy. So that's also the thing with Nancy. It's very lightweight; it just tries to make things easy for you to work with. And there are not too many concepts there. So we'll actually cover quite a few of them just within this hour. It's meant to be easy to work with and not get in your way. And you sort of have the freedom to do what you like to do, which are all things that I really like. That also means that Nancy is quite code focused. So that's why I'm showing it off in Visual Studio instead of doing it in a lot of slides, because, well, I think in this case at least, code speaks louder than slides. So I'll skip the slides and go over to Visual Studio. So I've cheated a little bit, not too much, I hope, but a little bit. So I've set up a project and a test project as well. So in Nancy today, we are going to build just a little back end for a to-do application. And I've just stolen the archetypical to-do MVC application done in Backbone.js. So it's a single page application where you can hammer in to-dos, and you can check them off. And that's all of it. It's meant to demonstrate the JavaScript side. We're not going to look at the JavaScript at all. We're just going to look at how you could set up a back end for this, because it's something that throws back some JSON, and it expects to get some other JSON back and forth. So that's what we're going to build. So what I've set up here is, well, this up here was just an empty ASP.NET application, just straight out of the box. And then I've added a couple of NuGet packages to it. I've added Nancy itself. And then I've added Nancy.Hosting.Aspnet. So Nancy tries to, or succeeds in, actually, separating your application from what you want to run it on top of. So there's the Nancy framework itself that's not dependent on ASP.NET or anything like that. So to actually run your Nancy application, you also need to have a hosting. One of the hosting options is ASP.NET. And that's just pretty easy to work with. You can just F5 in Visual Studio, and you can put it on IIS, or you could put it in Azure, or on AppHarbor, or wherever you want. But you could also go with doing self-hosting, just as Web API also does now. So you could just run your little Nancy application from within a Windows service or a WPF application or whatever. So if you have a WPF application deployed across an enterprise, and you want a backdoor into that, if you want to be like the NSA, but just within your own organization, so you want to be able to look inside what everybody's doing with your application, you just embed Nancy in there, self-hosted, and then you have HTTP endpoints. And you can just go there and look at stuff, well, except for firewalls, maybe. But if you control those too, right. So those are the NuGet packages I have in this: just Nancy itself and the ASP.NET hosting.
Then in this project down here, I also have a couple of NuGets installed, and there's also some test code in there already that I'll use to drive out what I'm going to code up. And the NuGets I have in this project are, well, just xUnit. That could be anything; it's just because I like xUnit. And then the Nancy.Testing NuGet. So it's probably already becoming clear that Nancy uses NuGet a lot to distribute the code. You generally don't really go to the Nancy site to bring down Nancy. You just use NuGet packages, and there are different parts of Nancy you can install as NuGet packages. Which is also one of the lightweight parts of Nancy. It's very modular. You can change everything, pull stuff out and put something else in. So if you want to have a view engine in there with Razor, for instance: Nancy comes out of the box with another view engine called the Super Simple View Engine, but if you want Razor, then you can just install a NuGet, Nancy.Viewengines.Razor, and then you have that. Nancy out of the box, as we'll see, comes with an IoC container for dependency injection. But you can also change that around. Oh, I don't want to use TinyIoC as Nancy does out of the box, I want to use Autofac or whatever, or Castle. Then you just install a NuGet, and the NuGet changes things around, and that's it. So that's very much the experience with Nancy for a lot of things. So I've also set up a couple of other things that we're going to need. First of all, this is our domain model, our very large and very intelligent domain model, which is a to-do. And it's just straight up the same properties as the single page front end is going to have in its JSON representation of the to-do. So this is really driven by how that Backbone application works. I also want to be able to store these to-dos. It's a back end; I want to be able to store stuff. So I defined this little interface for a data store. You can get back all the to-dos, and you can try to add them, or you can try to remove them, or you can try to update one of them. Pretty simple stuff, and not really too interesting. And then I've implemented that against MongoDB, and I have a Mongo instance running back here. You don't have to be able to read all that stuff. It's just, there's a MongoDB running here. And the talk is not about Mongo, so I'm not going to show off the code in there. Right. So let's get started, actually. So I'll start off with this test for what I'm going to call the home module. I'll just bring that in there, and move the mouse. Yes, now you can see the code. That's good. And usually there's something red here, because that refers to a class that doesn't exist. But let me just explain what's going on here, because I want to show Nancy from the testing perspective, partly because the testing support is so cool. So I think we should focus on that too. So what Nancy.Testing, that NuGet package I mentioned before, brings us, among other things, is this browser object, or this Browser type. So I can use a Browser, and that's going to be my system under test. And that is supposed to be my virtual browser. And I want to say right away: don't think Selenium and WebDriver and GUI testing and stuff like that. This is just an object that knows a bit about Nancy, so it can give Nancy what for Nancy looks like an HTTP request. It can give that to Nancy, and then it'll flow through the Nancy stack, but only the Nancy part.
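In code, the kind of test being described looks roughly like this with Nancy.Testing and xUnit; HomeModule is the class we're about to write, which is exactly the test-first order of the demo:

    using Nancy;
    using Nancy.Testing;
    using Xunit;

    public class HomeModuleTests
    {
        [Fact]
        public void Get_root_returns_OK()
        {
            // Browser pushes a fake HTTP request through the Nancy pipeline, in-process.
            var sut = new Browser(with => with.Module<HomeModule>());

            var actual = sut.Get("/", with => with.HttpRequest());

            Assert.Equal(HttpStatusCode.OK, actual.StatusCode);
        }
    }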
Not the actual HTTP and network stack and all that. And actually not the ASP.NET hosting part either. It just goes straight to Nancy and gives it what looks like an HTTP request to Nancy. Nancy doesn't know that it didn't come from a real browser, but just comes from here. So it's all in process, and it's fast enough. And you don't have all the trying to find the right element, and clicking on that, or moving the mouse, or anything like that. So this is, in my opinion, much more suited for most tests. It will just do the request through Nancy, reach your application code, or our application code, and that code won't be able to tell the difference either. So you get a very nice outside view on your application by testing through this, but without all the obstacles of testing through the UI, which is horrible. Also, what the browser allows you to do: it has an API that is, well, HTTP, because we're exposing stuff over HTTP. So we want to be able to express that. So I can say: my system under test, I want to do a get towards slash. And the get is an HTTP get. And I'm doing it to the root of my application. So these paths are obviously relative to the host. I get the response back. And there's different stuff on the response. You can get to the body and inspect what's in the body of the response. Or, as I do here, you can get to all the headers. And just for ease of access, you can easily get to the status code. And for this one, I'm just checking that it didn't blow up and it returned a 200 OK. So it's not too much. So first of all, we need this home module. Oh yeah, I might also explain this funky bit of syntax here. So when Nancy starts up, it wants to make it easy for you to register your application code with the framework. So it tries to discover where your application code is and then just auto-register that. So you don't have to do a whole lot of configuration. So we can rely on that from tests as well, actually. Or we can be a bit more specific, which I do like to be in the tests. And I can just say, okay, for this test, I want a Browser working against Nancy, but Nancy with only this module that I'm explicitly telling it about. In a bit, I'll talk about what a module is. So it doesn't see all of our application. It just sees the home module, because that's what I actually want to test against. And you can put more in here or whatever. But I think this makes it quite explicit what the scope of this test actually is. So I like that. Okay, so we don't have a home module. So we will add a home module. Can I remember? Yes, and it's just a good old C# class. Home module, yep. Seems like I can spell. Get rid of those. So we have a public class here. And now to make this a Nancy module, I make it inherit, well, NancyModule. No big surprise. Nancy modules are really at the core when you work with Nancy. This is where you code up your actual application. So what you do in a Nancy module is that you set up the routes that you want your application to be able to respond to, and you set up the handlers that you want to use for responding to them. So in our case, we need to be able to do a get against slash. So we tell it that here in the constructor. So this is telling Nancy: set up a route at slash and make that accept the HTTP method get. And we also want to be able to do something there. And, damn keyboard. There we go. And we'll just give it a lambda expression telling what we actually want to do.
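Where this module ends up, as a sketch, completed with the view line that gets discussed next; the Get["..."] indexer syntax matches the pre-2.0 Nancy shown in the talk:

    using Nancy;

    public class HomeModule : NancyModule
    {
        public HomeModule()
        {
            // GET / serves the page that boots the Backbone front end.
            Get["/"] = _ => View["index.htm"];
        }
    }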
And the stuff we want to do is: we want to return something. So if I was strictly just trying to make my test pass, I could just return HttpStatusCode.OK here, because that's all the test is checking. But we're going to be a little bit more loose here. And I'm going to say that I want to return a view. And the view I want to return is just index.htm, as far as I remember. So that refers, by convention, to a file called index.htm, obviously. But that can be placed in a few places. And one of them: Nancy, by convention, will look in your views folder. You can alter these conventions if you will. You can just set up your own, or you can run with Nancy's. And Nancy is pretty nice about error messages as well. So if it doesn't find a view that you're trying to reference, it'll actually tell you all the places it looked, which is pretty nice. It's like, oh, that's why. So you can decide if you want to move it, or if you want to set up your own convention, of course. Yeah, it looks in views for index.htm. And that's just a whole bunch of HTML. And that basically just loads up that Backbone application that we saw before. So along with the JavaScript and CSS and stuff I have in here in content, that's the Backbone part. So we're not going to worry about that. I just stole that off GitHub. Right. So let's see if it compiles, first of all. It did. And let's go back to our test. And let's see if that runs. And it did. Yay! Hello world in Nancy. And we got to see Nancy modules. Right. That's a success. So that's not quite enough. But now we are able to return that HTML page that will load up the Backbone single page application. That's actually fine. You can actually work with that. It's just that all the calls to the back end fail. You won't see that in the UI, it fails quietly. But if you look in Fiddler or something, it'll be quite obvious. So we can't actually do anything yet. So as I said, we want to build a back end for this. So, driving out the next bit: I've just put a little bit of setup here in my constructor for these tests, because I want to reuse it across more of the test functions. You see, again, I'm instantiating a Browser here. Setting up a module. Now I want to test against the to-dos module. The to-dos module is the one that's actually going to handle all the requests from our single page application. Then we also want to set up something more. I want to set up a dependency, specifically used in this test. So as I said, at its core and out of the box, Nancy gives you an IoC container, an inversion of control container, and it wants you to use dependency injection. So everybody familiar with dependency injection? Yeah, good. And here I just set up a dependency specifically for this test, and I'm setting up a fake data store. And that's just a mock of the IDataStore that we looked at before, which I'm doing with FakeItEasy. Again, that's not really the point. You could do it with your favorite mocking framework or by hand or whatever. Set up the dependency. That's fine. And then we have a little bit of test data here. Okay. So I want to test that when doing a get to slash to-dos, well, it's just what I do here, I want the correct number of to-dos to be returned. So I set up my mock here to, well, first of all, this is just creating a list of to-dos. Then I tell my mock data store that whenever anybody calls GetAll, just return that list of to-dos. Then I do a get.
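The IDataStore interface that the fake stands in for, together with the to-do model, reconstructed as a guess; the talk never spells out the property names, so these are taken from the Backbone TodoMVC JSON shape:

    using System.Collections.Generic;

    public class Todo
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public bool Completed { get; set; }
        public int Order { get; set; }
    }

    public interface IDataStore
    {
        IEnumerable<Todo> GetAll();
        bool TryAdd(Todo todo);    // false if the to-do could not be added
        bool TryRemove(int id);    // false if no to-do with that id existed
        bool TryUpdate(Todo todo);
    }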
And by the way, I say that I would like to have the response be application/json. So this is going to be used from JavaScript, right? So I want JSON back. Don't give me HTML or anything else; I want application/json. So you can also see, during the request, that this is really close to HTTP, or it looks close to HTTP. We're using the method here. We're saying what path. And then here we can set up headers and stuff like that, say which type of request it is, do more around the request. So you can really do a lot of things from the tests here, which is nice. Well, since I'm asking to get application/json, I'm expecting that I can deserialize the response, the body of the response, as JSON. And then I can just see: is it the length that I expected? So that's all well and fine. Again, this won't compile, because we don't have a to-dos module. So we will create a to-dos module. TodosModule. There we go. Again, get rid of that. And obviously, this is, again, a Nancy module, like before. And like before, we are going to set stuff up in the constructor. So this time, actually, everything that's going to be in here is going to be under this slash to-dos part of the application, because that's where we want to get the to-dos back, and that's where we want to post new ones and stuff like that. So, okay. There we go. Go away, please. So we just tell the base class, the Nancy module, that everything in here, go away already, should be relative to to-dos. I guess that should be fine like that. So now we just sort of namespace what we're doing in here. And now we do the get again. And it's just going to be to slash. I guess then I probably don't need the trailing slash up there on to-dos. And we want to do something here again. And what we want to do here is retrieve all the to-dos that are stored in our data store right now. So I'm just going to assume for a minute that I have a data store and that I can say GetAll to it. And then I will actually just return that straight as it is. And I need a data store. So I'll take a dependency on the data store. Data store. There we go. And what's it telling me? Return type is void; return type of what is void? Let me just compile, maybe it tells me something more. My constructor is returning... Yeah. Yeah. Thank you. Yes, that's better. Yeah, yeah, exactly. My constructor was returning, that's true. And I meant the lambda to return, of course. Thank you. So yeah, this is just returning the list of to-dos from my data store. And it's just returning that back into Nancy. So as you see, the lambdas that we give here to get, for example, that are going to handle our requests, they can do quite different things. And Nancy will just pick it up and do what you want. So here I'm just returning an IEnumerable of my domain objects. And that's fine. It's going to be fine. And over here in the home module, I am using Nancy to locate a view and we're returning that. Those are quite different things. I could also just return a string, hello world. That would be the body of the response. And Nancy would assume that it's text/html as content type. Or I could return just a status code: OK, or 404, or whatever I wanted. And again, Nancy will just take these different types and make them all into a Nancy response and be able to output that. And when you don't specify a whole lot, like we don't do here, Nancy will just assume a few things. For example, it will assume that it's okay. So the status code will be set to 200 OK.
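Pulled together, the to-dos module so far looks something like this; IDataStore is the interface sketched above, and the constructor dependency gets resolved by Nancy's container:

    using Nancy;

    public class TodosModule : NancyModule
    {
        private readonly IDataStore dataStore;

        public TodosModule(IDataStore dataStore) : base("todos")
        {
            this.dataStore = dataStore;

            // GET /todos returns everything in the store; Nancy serializes
            // the IEnumerable<Todo> according to the request's Accept header.
            Get["/"] = _ => this.dataStore.GetAll();
        }
    }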
And actually Nancy will also, in this case, assume that, okay, you're returning some objects. You probably want to put them into the body of your request, or sorry, your response. And you want to say 200 OK as the status code. And when I serialize these objects into the body of the response, you probably want to respect the Accept header that the client set. So this should be serialized into JSON, because that's what the client asked for. Okay, so crossing my fingers and seeing if this now passes. It did. Oh, yeah. It looks like three tests, just because I'm running it three times with different amounts. So zero to-dos, one to-do, and 42 to-dos, because you always need 42 in there. And then just to demonstrate, I have this test down here. It's almost the exact same thing. The difference is here: now I'm asking for XML, because I still love the 90s. And I expect to be able to deserialize that as XML. And let's try to run that. And that passes as well. So thank you, Nancy. Nancy did the content negotiation for me there, looked at the Accept header, and then just serialized based on the Accept header. So that's nice. Out of the box. Okay, so now our single-page to-do application is able to get the to-dos that are already on the server side, and it'll display those. We can't do anything with them yet. So let's move on; we're fine for time. True. So we also want to be able to create new to-dos on the client side. So, a bit of setup of the mock I have. And then that will be a post. So again, this is the browser object that we looked at. Until now, we've used it with get. But we can also use the post method of HTTP, do it to the same place, and we say this time we serialize the body that we want to post. We serialize that as JSON. So again, this is setting up a fake HTTP request, an HTTP post, and it's putting a body onto that post. And the answer from that, we want that to be Created, and we want the body in the answer, or in the response, to contain the to-do that we just posted. Okay. Right, so this should fail at the moment. We haven't written that code, right? So that fails as expected. That's nice. It says, see down here, the status code was NotFound. No big surprise. We haven't set up anything to handle posts to that endpoint. Back over here. And so it's just more of the same, really. I do that. And you can do this with all the methods of HTTP. So if we have the time, at least, we'll show this for put and delete, and it's really more of the same stuff. But you can also do PATCH and HEAD and OPTIONS and stuff. You can set those up. Or you can leave stuff like OPTIONS and HEAD out, and then Nancy will just handle that for you. So in the post here, well, we want to do slightly more than just a one-liner. So first of all, let me just open up a scope. And first of all, we want to get hold of a new to-do. So that's the to-do that is being posted in to us. So we are going to ask Nancy to try to create a to-do for us based on the request. So model binding in Nancy is done like this. In our case, we have the to-do in the body. So Nancy will look in the body here and deserialize that into a to-do if it can. It could also look in other places, like form values, if I remember correctly. I'm pretty sure it will. So once we have that, we just work with our data store again. If, let's say, if not dataStore.TryAdd, and I've just made it so it returns false if it wasn't able to put it in there. And it's our new to-do
that we want to put in there. Eh, come on. And so if it's not able to do that, we will return something. And in this case, we'll just return an HTTP status code. And it will be BadRequest. And otherwise, we'll assume that it was added. So how much did I assert on over here? I asserted that it was created and that it's... oh, yeah. I think I said something wrong here. I think I said that I was asserting that the to-do was in the response. I'm actually not. I'm just asserting that we tried to put it into the fake data store. So inside of this method, we're just messing about with the fake. I've hidden that because it's sort of ugly. So we will actually just return an HTTP status code here as well: Created. Oh, you see, you have all the status codes just named out here, so you can get to them easily. And you just need that. Let's see if it compiles. And it seems to compile. And go back, run this. Right, there we go. So actually, once again, the Bind here sort of respects what's on the HTTP request. So in our test here, we explicitly set, using this helpful method, that we wanted the body to be JSON. So not only does that serialize my to-do object there, the aTodo, as JSON, but it also sets the HTTP headers to indicate that the body is JSON. So the code we have over in the module would also work if we serialized into XML and then posted that up and set the content type correctly. So again, Nancy respects this content negotiation part of things, and will just pick it up. So out of the box, Nancy can handle JSON and XML in this way. If you want to do something more, it's really just a matter of implementing two different interfaces: one for the ingoing part, for data binding, and one for the outgoing part, the serialization to responses. Each of those interfaces has a couple of methods. And then you can just put that somewhere in your assembly, and Nancy will scan the assembly, find the implementations of the interfaces, and they'll just hook into this again. So if you went over to ServiceStack, maybe, and grabbed hold of their implementation of protocol buffers, because that's a very efficient protocol, you could plug that right into those two Nancy interfaces, which is quite easy. It's about 50 lines of code combined, those two, and that's even including namespaces and empty lines and stuff like that. Then Nancy will pick that up, and you can do the same here. You could put a protocol-buffers-serialized to-do into the body of a post and set the content type, and Nancy will pick that up. There's a slide on that for you. It's fine. So it's just another example. It's really easy to hook into the Nancy engine and just make stuff happen. And stuff like that sets up a nice separation of concerns as well, because this is sort of my business logic here: I want to pull out the to-do, wherever it was in the request, and then I want to put it in my database. I don't want to muck about with the exact format here. It's really nice that that's just an infrastructure thing that Nancy handles for me. Nice. Okay. Looking at time: that's fine. Okay. Yeah. So here I'm just checking that I don't want to allow saving the same to-do twice. We'll skip that. Okay. So as I said, this is just really all the same. We want to delete something, and we want to do that based on the ID of the to-do. So every to-do has an ID. And again, because this is how the Backbone example expects things to be, it wants to do an HTTP delete to to-dos slash the ID of the to-do that it wants to delete.
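The post handler assembled over the last few steps, as a sketch; it lives in the TodosModule constructor next to the get route, and this.Bind<Todo>() needs a using directive for Nancy.ModelBinding:

    // using Nancy.ModelBinding; at the top of the file
    Post["/"] = _ =>
    {
        var newTodo = this.Bind<Todo>(); // model binding from the request body

        if (!dataStore.TryAdd(newTodo))
        {
            return HttpStatusCode.BadRequest;
        }

        return HttpStatusCode.Created;
    };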
So I'll do that, and then I'll ask my fake if we tried to remove to-do number one from the data store, and I'm saying that must have happened. This is just an assertion with FakeItEasy. So, to no big surprise at this point, I think this is just more of the same. There we go. And so we just say delete. And we say slash, and then we want to grab hold of the ID. So we do this like that. And then, actually, now I'm going to give the argument of our lambda a proper name, because now I'm going to use it for something. That's why I just call it underscore up there. That underscore is a valid name for an identifier. I like to use it when I'm not actually using that variable for anything, because it just sort of visually disappears. So that indicates to me that I don't care about this right now. Whereas when I actually use it, I want to give it a name that I can read. So we need the ID out of there. So I'll just go into the parameters here and I'll just say id. So parameters here is a Nancy type, or has a Nancy type, the type DynamicDictionary. So that's a dynamic object, to no surprise. And one of the things you can do with that is that you can just use dot notation to grab hold of whatever you're capturing up here. So what you put in here as the path that you want to handle with this handler, it can be a regular expression. So you can have different parts captured. And this is really just capturing anything behind the slash to-dos. But I could write out a regular expression saying that I only want numbers there, and I could go on, and I could have another slash and then another part of the path. So I can grab different parts. Anything you can do with a regular expression: knock yourself out. And the parts that you name, you can grab just out of here, like that. So, well, yeah, depending on whether you like dynamic or not, this is really cool. At least it's easy. I do like it for stuff like this, but I also like to keep the dynamic parts right here in the handler, then take the things out, and then, in the business logic behind it, I want to just be in the typed part of C#. That's a personal opinion. You can have another one, of course. So we will try again to delete: TryRemove, and that takes an ID. So it should be fine. And, well, let's do it more or less as before. So if we aren't able to remove it, we will say that that's because it wasn't found. NotFound. So that's a 404. And otherwise, we will return, well, OK, I guess, would be fine here. And compile again. Let's see if that runs. And it does, yeah. So we can inspect the to-dos, we can post new ones, and we can delete them. I am going to skip number five here, which is doing the same thing, but for put. It's really just a handler very similar to what we've been seeing here. We'd just use data binding, like up here, to pull out the to-do of the request, and then I would try to store that and return status codes depending on what happened. We will just skip that, because I think you get the picture. So the last thing we actually want to do here is, well, we want to start using that Mongo database that I have back here. We haven't used that at all up till now, because we are just running against a fake data store.
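Before moving on to Mongo, here is the delete route just walked through, sketched. The {id} capture lands in the DynamicDictionary, and the conversion to int happens at the assignment, courtesy of the C# compiler:

    // also in the TodosModule constructor
    Delete["/{id}"] = parameters =>
    {
        int id = parameters.id; // dynamic capture from the route pattern

        if (!dataStore.TryRemove(id))
        {
            return HttpStatusCode.NotFound;
        }

        return HttpStatusCode.OK;
    };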
So actually everything we have done up till now would work if I didn't have this class in the solution at all: my actual implementation of the data store. Because as long as I tell the tests, here is a fake implementation of the data store, well, that's fine for Nancy. You don't need to have an actual implementation of the data store. So you can test-drive for a while there, and use that also to drive out the interface for your dependencies, and then go implement them for real once you are ready for that, so you don't have to guess the interface. Anyway, we want to hook this up. So go to this test here. So this time around, first of all, we make sure that there's no old data in our Mongo database: we connect to the database that I have at localhost on the default port, and I just drop it. So I have some empty data to work on. And then I set up a Browser again. This time I don't inline the configuration of it. This time I want this to be an integration test. So I want to run off the bootstrapper that I'm going to create in a little bit, and that is going to be the bootstrapper that the application is actually going to run on. And then we just have a test that posts a to-do as JSON. And then, once it's finished doing that, it does a get to the same place again and says, oh, I want it as JSON. It tries to deserialize the body of the response from all of this. So this is two calls chained together. So this is nice if you want to test a scenario. Of course, this can also get out of hand if you have really long scenarios, so you have to use judgment there. You actually have to think, even though it is easy. But this is fine, just doing two: first posting and then getting. And the response here is the response from the last one. That would be the get, to no big surprise. And then I just assert that I get back what I expected there, that I get back the one that I just posted. So, since I'm going to run on the real bootstrapper, this is an integration test that sees that I can store it to the Mongo database and retrieve it from the Mongo database as well. So it's just a smoke test that everything is connected correctly. So, I don't have a bootstrapper yet. And this is the part of the code where I'm going to cheat a little bit. So I'm just going to show all files for a little bit. And then I'm going to grab a bootstrapper that I already have here. And we're going to look at that. So until now we've just used Nancy in the default configuration. We haven't even taken advantage of the fact that we have installed the ASP.NET hosting, because we haven't run it on a web server yet. We've just run it from the tests. But when you need to configure something for yourself, you do that in the bootstrapper. You just have to have a class that implements INancyBootstrapper, I believe, as the interface. And typically the easiest way to do that is actually inheriting from the default Nancy bootstrapper. So up until now we would have just run off of the default Nancy bootstrapper, which, well, sets up all the defaults. But I want to do a little bit more now. I want to do a little bit of configuration of the IoC container. First I just call base. And one of the things that it will do up here is auto-register everything that it can find.
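A sketch of such a bootstrapper. The override name and the TinyIoCContainer parameter match Nancy's DefaultNancyBootstrapper, while the MongoDataStore constructor and connection string are stand-ins for the demo's hard-coded ones:

    using Nancy;
    using Nancy.TinyIoc;

    public class Bootstrapper : DefaultNancyBootstrapper
    {
        protected override void ConfigureApplicationContainer(TinyIoCContainer container)
        {
            base.ConfigureApplicationContainer(container); // keep the auto-registration

            // Explicit registration, since the real store needs a connection string.
            container.Register<IDataStore>(new MongoDataStore("mongodb://localhost:27017"));
        }
    }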
So actually if our Mongo data store, the implementation of the IDataStore, if that didn't take any arguments, if that was just something that could be newed up, I wouldn't have to do anything here. Nancy would be able to just pick up that I have an implementation of the IDataStore, which one of my modules needs, and it doesn't take any arguments for its constructor, so, well, it's fine. We'll just new that up and assume that that is the dependency. But since I need to pass in a connection string, I want to take more control. I probably don't want to hard-code that either, but, demo code. So I just new up the one connection here to the data store and register that with the container. And just to demonstrate, there are various parts that you can get into here and set up all kinds of different things within Nancy. But oftentimes you don't actually have to do a whole lot here, because Nancy is really good at auto-discovering things. Like I said before, if you want to implement your own serializer, fine, just let Nancy discover it by itself. But if for whatever reason you want to take control of how that's instantiated or something like that, again, you can do that in the bootstrapper, and you can set up the body deserializers that you want here. So, very modular, very much on the convention-over-configuration side, but it does allow you to go in and change everything, basically, within the framework itself. Right. So now we have a bootstrapper. We have the Mongo database running. We go back to our test here. Let's see if it compiles, at least; it seems to. And then let's just run this integration test, and we can maybe... oh, yeah, it's doing something with the Mongo database. You don't have to read all that, but you can just see that something is going on at least, and it seems to be done. And it actually seems to have succeeded as well. So why don't we actually try it: well, let's compile, so we should be able to... This is how the application looked before the session started. If you went to localhost, we just had Nancy's very nice custom 404 page there. Oh, and you can see up here, it's a Nancy application. So that's, yeah, so anybody recognize her? It's Nancy Sinatra, seen from the side. So let's see what happens if I go there again. So this is, well, you can't really see that this is... well, you can see it there, actually. This is just exactly how this to-do MVC app would look running against anything, of course. You're just seeing the JavaScript here, which is why I'd rather see it from the test perspective, because what we're doing here is an API. So this is not the interesting part, really, for this talk at least. But it should work. And you see there is task number one there, and that's actually something that's left over from some of our tests. Because at the beginning of our tests we drop the database, but we don't drop it afterwards. So we have some extra data there, and we might actually be able to see that it was created somewhere here in the software. No, can't find it, and this talk is not about Mongo. So that was pulled out of Mongo, and we'll just, well, finish the talk without it. Yes. And there we go. And we can reload, and it's still there. So it was back in the database there, and we can do things to it. We're not in America, so I can write that. And we're just changing that back and forth. So we have some time left. I could do... well, I actually cheated there. Did anybody notice that? You don't have to do that. Exactly. Thank you. Yeah.
So we have like 13 minutes left. So we can spend that on questions, or you can go early to lunch, or I can do the put implementation. So I'll talk to you guys. It's okay if you go for lunch. Yeah. So in the delete method: the delete method took an integer, but the URL is of course a string. So somewhere in there there is a conversion happening? Yes. True. Yeah. And that's, yeah, so we could be explicit about that as well and do a parse, if we wanted to, here. That's what the parameters, the dynamic property, will do. So what happens if you send in something that's not an integer? It will blow up. In a good way or a bad way? Let's try. Well, it's probably just because we actually have an implicit cast down here. Come on. Tell me what the type of that is. Yeah. So the type of this is dynamic. So it's not typed. So let's just go over and try to delete something. It's easiest to do from the test. It's this test. And here we go. And then we'll just delete... So, that's not a number, as far as I know. There we go. Yeah. So that's a good question in that sense, actually. I said before that Nancy gives you some good error responses, at least in some cases. And I think it does. One thing to notice is that when you run from tests, at least, you often have to scroll a little bit before you get to the actual one, because there's always the configurable bootstrapper part first. So when I newed up the Browser object that we're using, I had the with arrow, with, and that creates a ConfigurableBootstrapper. And that's something that can be programmatically configured. So that's instead of my own real bootstrapper. And that will always be at the top of the error messages here. And once you get past the noise here is usually where you'll probably find something interesting. Input string was not in the correct format. So let's look at the stack trace: parse int, CallSite.Target, dot dot dot. So let's see. It seems to be line 23 that's trying to do the conversion. Yeah. So, sorry, so I think it's because TryRemove takes an integer, and it's really the C# compiler that's doing the conversion there. I'm sure if you put the int parse up here, then the explosion would be here. So I guess this sort of demonstrates why I think it is nice to have the dynamic up here, where you're pretty close to the HTTP request, which really is just strings. And it is sort of dynamic on the web, right? So it makes sense here. But once I move into my back-end business code, I tend to like having types, actually. I'd rather have a blow-up early, here, than somewhere deep down, right? It's easier to identify the fault here. Yeah. Other questions? Yeah. Sorry. Well, if you're in luck, you do it with a NuGet. Yeah. Connected. Come on. So first we can actually just see the packages that we have installed. Come on. Get back. So these are the packages that we are using right now. As you see, there's the Nancy package, the ASP.NET hosting one, and the testing one, as I mentioned at the start of the talk. We could install other Nancy packages. And there are quite a few: Nancy dot, and then tab. Yeah, there we go. And you have all sorts of different things that people have put in. For instance, here you have, if you want to use Autofac, you basically just install that. Or Ninject, StructureMap, Unity, Windsor, whatever.
But I mean, if you want to use other things, like if you want to use Simple Injector, for instance, because that's fast, you can also set that up in your bootstrapper. Because what these packages basically do is install a bootstrapper for you that goes into Nancy and just says: don't use the container that you used to use, use this container. And, I'm sorry, I don't remember exactly how you do that, but I think it's basically just one method that you need to implement in your bootstrapper. Maybe a couple. It's not too much code. And at the least, you can just create a project, install one of these, and see how they do it, so you have something to copy from. I think that should be fine. And you're actually able to set it up well enough, so to speak, that you're not using two IoC containers at that point. Nancy internally also uses dependency injection for some things, at least. And if you say, here's Simple Injector, it will use Simple Injector. It won't use TinyIoC and Simple Injector. So you don't get weirdness around that. Yeah, it's good enough. Yeah. What's your experience with the self-hosting? So the question is, what is my experience with the self-hosting: how does it perform, and how stable is it? I don't know how it performs, I must say. And I only have fine experiences with it, stability-wise. I haven't run it for long, long periods of time and under heavy load, the self-hosting. So I don't really have a good answer for your question. I would say, so again, there's the split: there's Nancy, and there's the hosting. I'm pretty confident that Nancy is fast, at least fast in the sense that I think usually your application code is more likely to be the bottleneck than Nancy itself. And then, specifically talking about the self-host, you install the self-host as a NuGet as well. And that's basically just an HttpListener. So it shouldn't be too complicated either. So, I mean, probably fine. But yeah. Can you run this on Mono, on Linux? Again, that is possible. Everything in Nancy compiles on Mono as well. I haven't tried running it there, so I don't really know what you run into. I know people do it, and some have deployed it to Heroku, for instance, the cloud service, as well. So, it's doable. Yeah. So I don't really have a good answer for it. Yeah. I think. Authentication? Authentication. Yes. World domination. World domination, yeah. Oh, yeah. So again, you can install a NuGet package and get at least some forms of authentication sort of out of the NuGet box. Or, again, you can hook into Nancy and implement your own handlers for that. And that will be something that you have sort of on the side, an infrastructure part of your code. And then, in your modules, you can indicate to Nancy that you want a module to be a secure module. Which I think makes sense, because you'll probably often divide the modules up in such a way that there's an area that is a secure part. Maybe it's everything. Maybe it's just one area. But I mean, why not factor it out, so it's a whole module that is secure? And then you, damn it, should be somewhere. There you go. And then that's it.
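A sketch of what marking a whole module as secure looks like, using the RequiresAuthentication extension from Nancy.Security; the "admin" path and the handler body are my own illustration, not code from the talk:

    using Nancy;
    using Nancy.Security;

    public class AdminModule : NancyModule
    {
        public AdminModule() : base("admin")
        {
            this.RequiresAuthentication(); // every route below now demands an authenticated user

            Get["/"] = _ => "Hello, " + this.Context.CurrentUser.UserName;
        }
    }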
Then, depending on the authentication that you have set up — if you installed the forms authentication package, this will be forms authentication. Or you can set up your own key-based or token-based authentication, if you were at Dominick's talk yesterday and heard that you shouldn't use forms authentication. So that's pretty simple and pretty basic, of course, as well. And this is only authentication — it isn't authorization. Yeah. Well, time for one more question. Documentation? Documentation. Yeah, well, that's always a good question. So — well, you can just see for yourself. There we go. Let's take a look at the wiki. So this is the documentation here, and this is the part that you should be looking at, over here. And there is actual text behind all of these. So, I use it — and get back... So if we want to look at authentication: here is the authentication overview. Forms, basic and stateless are the ones that you can just get. And what you do is — oh, there you go — here's how, so here's stuff about how you do it yourself, securing your resources. You can also do that with a before-and-after hook, or you can do as I showed before: this.RequiresAuthentication() and so on. I mean, it wasn't prepared that I went into exactly the authentication part — this is typical for the documentation. So I actually think it's at a good level, because you get the information, but it's not information overload, and you don't have to read hundreds of pages or everything. And then, if that's not quite enough, I also find it useful to actually go in here, to the Nancy code, and then you see there are also all these demo applications included in the source code. So you can just, again, look at authentication, and then you have a demo application that uses basic auth here. Oh, so that's a secure module — let's look at that. Oh yeah, and it does this, and probably has — yeah, it has a bootstrapper, let's look at that as well. So this seems to be what you need to set up authentication. And you're guaranteed that this is up to date: the demos work, they run. Yeah. So time is up. I'll be around the rest of the day — I'll be around here. So if you have any more questions, please come talk to me. Thank you. Thank you.
Nancy is a lightweight .NET web framework. It provides an easy-to-use, to-the-point alternative to the most commonly used .NET web frameworks. Nancy does not try to be everything to everyone. But it does try to be the super-duper-happy path to web development on .NET and Mono. Come get introduced to Nancy, and judge for yourself, as I build a small Nancy application TDD-style and along the way demonstrate core Nancy features as well as a few slightly more advanced features such as content negotiation.
10.5446/51400 (DOI)
Okay. Yes. Okay, I think it's working. Good afternoon, everybody, and thank you for coming so late. I know I'm the only person now between you and the party, so I'll try to keep you awake. My name is Christophe Coenraets. I work with Adobe, I'm based in Boston, I work with the PhoneGap team, and what I'd like to do with you today is take you through my experiences working with developers like you who build PhoneGap applications. So I'm going to show you a number of examples — that's why I put my GitHub repository there. Sorry about the last name; that's the difficult part of this session. But most of the examples that you will see today are on that GitHub repository. On my blog, too — I blog a lot about PhoneGap, so you can find a lot of interesting information there as well. So what I decided to do was a top ten, to keep it kind of lively: top ten architectural principles. Some of them will be geeky, but a lot of them, as you will see, will be common sense. But for some reason, when we move to a new platform, we sometimes tend to forget what we learned in the previous language, and I'll try to remind you that you can actually apply some of the same principles when you build an application with PhoneGap. But before I do that, I always like to come back home with a little souvenir of my audience, so I need to know a little bit more about you. The first thing I will do is this: I'm going to take a picture of you, if that's okay. Let's see. All right, let's do it. All right, and let's bring it back there. I think that's fine — I'll use Photoshop to add a few more people, but I think that will be fine. Which brings up a good question: what am I using here to do this presentation? What presentation technology am I using? Am I using PowerPoint? Am I using Keynote? I'm using PhoneGap. Yes. So what you see here is actually a PhoneGap application. And I thought it would be useful to build it that way, because then I'm able to show you stuff inside the presentation without doing the usual back and forth between my device and the computer. So let's see if this works. Okay. So let's get started. Native applications are cool, and at Adobe, for instance, we build a lot of native applications. For instance, if you use Photoshop on your device, that is not an HTML application — it's a native application. It makes a lot of sense, because what we're doing in Photoshop is manipulating a crazy amount of pixels and it's very, very heavy, and native applications make total sense for that. However, we also have a lot of applications built with HTML, because when it's not crazy intensive like Photoshop, it makes sense to use an abstraction layer. And you all know that there is an abstraction layer that exists out there, and that's essentially the web stack: HTML, JavaScript and CSS. When you build with that, you can actually target all these different platforms — some of them you may not even be aware of. I finally found a Bada developer the other day in Montreal; I was told that most of them are in Korea, because it's a Samsung thing. But there are new platforms coming, too — like the Mozilla phones with Firefox OS, Tizen — and it seems like every day there will be a new platform. And the nice thing about the web stack is that you can build and deliver to all these platforms through the browser.
Now, I don't know if any of you has ever tried to take index.html and submit it to the Apple App Store — they are not going to take that. So your delivery mechanism is really the browser. And that was really the initial idea behind PhoneGap. We said: okay, we love that platform so much, and we are so close to being able to build apps with it, that we're going to try to fill that gap, if you want, and enable the development of apps — something that you can actually deliver through the different app stores and things like that. So basically what we did initially was create a very thin, very lightweight native shell that's simply wrapping your web app. So if you look at this, this is essentially 99% HTML, JavaScript and CSS, and 1% native. And the good news is that even that native piece, we give it to you — that's essentially what PhoneGap is: it's a wrapper. But if you do only that, your application may still not be interesting. And to be honest with you, Apple may still reject it. There is an item in the licensing agreement saying that if all your application is, is essentially a web app that you wrapped, please deploy it through the browser — we are not going to take these applications. So they really want you to use the features that are available on your device. And that's really the second component of PhoneGap: PhoneGap is a bridge. It gives you a consistent JavaScript API to all the cool features that you have on your phone — the GPS, the accelerometer, the camera, the contacts, you name it. And it's always the same JavaScript API; I'll show a small sketch of what that looks like in a moment. In other words, when you develop that application, you don't have to go "if iOS, then this JavaScript API, else if Android, then this JavaScript API" — no, it's always the same thing, and we do the bindings for you behind the scenes. So that's the second piece of PhoneGap. And if you know that, you know everything about PhoneGap: PhoneGap is nothing else, and that is by design. It's just a wrapper and a bridge — a wrapper for your web app. So now let's get into the real topic of this talk. Because it's a web app, a lot of people say: okay, I know how to develop web apps — we've been developing web apps for 15 years, I know how to do it. And then they get in trouble. And really what I want to do with you is try to make sure that you don't get in trouble. So this is how people have been building web apps in the past — that's the old-school architecture, if you want, and you may recognize that you have some version of that inside your company. That's a sitemap, right? And that's kind of your web app or your website. And behind each of these boxes there is a little piece of PHP, JSP, ASP, Ruby on Rails or whatever at the server side that's going to generate a piece of UI, a piece of HTML, that you then send back to the client, and the client will render that HTML. So in other words, if I was to build a mobile application with that architecture, it would look something like this. So this is an employee directory application — you see that I work for an interesting company here. If I click one of these guys, it would do something similar to that: it would say, oh, okay, I need to display the details of that specific employee, Dwight, so I'm going to go ask my PHP server (or ASP or whatever) to give me that page. Is that good? Do you think you want to do that? And I'm saying that because I know you guys are super smart — but some people actually start this way. And then they say: hey, I can't use PhoneGap, it's too slow.
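Here's that sketch — the standard PhoneGap camera call, which is literally the same JavaScript on every supported platform (the img element id is mine):

    navigator.camera.getPicture(
        function (imageData) {   // success: a base64-encoded image
            document.getElementById("photo").src = "data:image/jpeg;base64," + imageData;
        },
        function (message) {     // failure
            console.log("Camera failed: " + message);
        },
        { quality: 50, destinationType: Camera.DestinationType.DATA_URL }
    );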
But what happens here has nothing to do with PhoneGap, obviously. What happens is that you ask the server to generate your UI. And obviously there is the network latency — if you're in the subway, it's going to take three seconds. On the web, in a browser, people are used to waiting two seconds; in a mobile app, not at all. They will totally trash your application if they have to wait half a second. So that is simply not acceptable, okay? And the second thing that's probably not acceptable is that if you rely on the server to generate the UI, what do you lose that's expected from a mobile app? Offline, right? If you rely on a server and the server is not there, your app doesn't work at all. So even though you can build a PhoneGap application this way — the old way of web architecture — it's really, really, really not recommended to build it that way. And so the new school is to use what we call a single-page application. And a single-page application is essentially this: you have one, and only one, HTML page in your entire application. And then you use JavaScript to generate the UI at the client side, entirely in JavaScript. You create views in JavaScript at the client side that you inject into that DOM, and when you don't need a view, you remove it from the DOM and inject the next view. Okay? So you create the UI entirely at the client side, in JavaScript. You'll get used to it. Okay. So some of the key differences, obviously, compared to multi-page: you have only one page; the UI generation tier is client-side, it happens in JavaScript; it fully supports offline; and you are fully in charge of the page transitions. If you ask the browser to go get the next page, the browser will be responsible for displaying it — and how will the browser do that? Boom, by simply replacing the existing page. Forget the nice transitions that people expect. The other major problem is that it's going to be slow, and another reason for that is that if on each page you load libraries and CSS files and stuff like that, they have to be reloaded every single time. Obviously, in the single-page approach, because there is only one page, they are only loaded once. So the number one principle is that even though for traditional web apps that you run on a traditional computer, single-page applications may or may not be a good fit, for mobile applications — especially PhoneGap apps, so not something that you run inside the browser — they are a must. If you want performance, you really need to do it this way. Okay? So the number one principle is: use a single-page application. Now, there are many benefits, and we talked about them on the previous slide. There are also some downsides, because if you think about it, the old way was really simple: your unit of work was a page. And what can go wrong in a page? Can you have, let's say, memory leaks in a page? You know the lifecycle of a page: you display it, the user is going to click something, and the page goes away. So the likelihood that you will experience a memory leak in that very small unit of work is very low. However, with a single-page application, you never leave that page. It's always the same DOM. So now you really need to be careful about what you do, because I may actually spend eight hours in the same DOM — and remember, I will inject views into and remove views from the DOM. We'll speak about that in this presentation. All right, so how do you do it?
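As a minimal sketch of the single-page idea — assuming a single container div and some render functions of your own (all the names here are hypothetical):

    // index.html has exactly one container: <div id="app"></div>
    function showView(html) {
        // remove whatever view is in the DOM and inject the next one
        document.getElementById("app").innerHTML = html;
    }

    showView(renderEmployeeList(employees));
    // later, on a tap: showView(renderEmployeeDetails(employee));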
How do you build a UI in JavaScript? Well, it's HTML, so you know how to generate HTML. So if I need to create a view to display the details of an employee, I may do something like this: I create a string where I concatenate markup and then the model — some data — and then I render that, I inject that fragment into the DOM. How do you guys like that? Beautiful code, easy to write, easy to maintain... You don't even have to use an HTML editor anymore, because it's totally useless — you can't get code hinting and stuff like that, if you ever wanted to. You can't delegate the views to a designer, which these days is part of the workflow, because your designer is never going to write that code, obviously. So that is something that you shouldn't do. Instead, a better approach is to use templates. Now, I don't know if some of you are as old as I am and remember how it started at the server side 16, 17 years ago — it started the exact same way. I wrote that kind of code in C and C++, and it was called CGI programming, and that's how you kind of generated dynamic web pages initially. And then people said: oh, that's crazy, it's just not flexible enough, and we have to recompile all the time — let's create templating engines or servers. And that's how PHP, JSP, ColdFusion, all these solutions to that problem came about. Well, we are solving the exact same problem here at the client side now. There are a lot of templating engines. And so the number two principle — the second rule, if you want — is that you absolutely need to do it that way. Otherwise, if you build anything more than a trivial application, it's going to be a big mess. And so what should you use? There are many, many options, just like at the server side: Handlebars.js, Mustache.js, Dust.js; and if you use Backbone — we'll speak about frameworks in a second — Underscore.js is kind of a multi-purpose library, but it also has support for templates. All right, so let's move on to this, which is something that I often see when I have to debug an existing PhoneGap application. I try to debug it on my computer — and oftentimes when I ask what's wrong, people tell me "you have IE on your iPad, that doesn't look right"... that's just to give you the idea. What's wrong, if this code is not minified, is that there is a problem, and the problem is happening on line 3,000-something. And unless, of course, this code has been concatenated and minified, that indicates that your code is a big mess, okay? And I see that a lot, because people start with JavaScript — the new JavaScript, coming from the old days where you just kept adding functions to main.js — and that's essentially what happens at that point. So the third principle is to really think about it in a different way, very much like you structure your code in another language, and really provide structure to that code: use some kind of an MV-something architecture. And I put a star there because some people are religiously attached to MVC or MVVM; they are all different flavors, and the core principle is that you should really partition your code. And that may be by creating model classes. And this is not the one way of doing it — this is using a constructor pattern; you may want to use a module pattern, I don't really care at this point. I care that you partition your code. And let me actually go into some examples.
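For instance, the templating principle as a hedged sketch with Underscore.js and jQuery — the ids and the employee model here are mine:

    <!-- an Underscore template: HTML plus placeholders, no code -->
    <script type="text/template" id="employee-details">
        <h1><%= firstName %> <%= lastName %></h1>
        <p><%= title %></p>
    </script>

    // compile once, then merge the model instead of concatenating strings:
    var detailsTemplate = _.template($("#employee-details").html());
    $("#app").html(detailsTemplate(employee));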
I told you I'd show you some of the applications that we are working on. So on the iPad, we're going to build something like this — even though this is more of a phone layout. So this is the application that we are working on, and I want to show you some of the pieces. So first of all, principle number two was that you should work with templates, and so these are some examples of templates. The nice thing about templates is that at the end of the day, they are HTML. So you are back to HTML and then a bunch of placeholders. And this is using Underscore.js, so that's the syntax — the most common syntax is the double curly brace, but Underscore uses kind of a JSP syntax. And that's basically a placeholder simply saying: okay, at some point I will merge my model into this, and you will do the right thing. Okay. So that's the templates. Now, let me show you some examples of views, because it's all about, as I said, organizing your code here. So this is an example of a view. This is actually using Backbone, but it doesn't really matter. The important thing is that we have a view object here that we can instantiate, if you want. And the way we render the view — views typically have a lifecycle, and one of the methods of the lifecycle is to render the view — the interesting thing here is that you don't see any concatenated HTML code. What we do is invoke the template function and pass the model into it, so that the template engine can do the merging, like we expected. So you see that essentially I'll have this kind of object for each view in my application. And then I will also have models — in Backbone, I created different models here. Models are classes or objects that are responsible for managing the data; they don't know anything about the view. So that's a first rule of partitioning. And so that's kind of the way you start structuring your application. So you have models, you have views. And again, this is just sample code; depending on whether you're using a framework or not, it will be different, and you can come up with your own approach. This is a controller, and we'll speak about that one in a second — it's kind of a data adapter; I will speak about that specifically later in this presentation. Okay. So: provide structure, use an MVC type of architecture. Now, before you go do that and try to reinvent the wheel — the fact is there are many, many, many smart people who have been thinking about this for a number of years now and came up with some frameworks. So before you create your own framework — I don't think the world needs additional JavaScript frameworks at this point, even though they keep coming — consider using a framework. But consume responsibly, as always, because frameworks can also get you in trouble. You know, there was never one blessed way of doing it with PhoneGap; PhoneGap is completely agnostic. As long as you have a web app, we'll happily package it and bridge it. We didn't want to say "hey, we would rather you use that framework" — that's really not what we wanted. But we saw a lot of people getting in trouble, and so now I'm going to be just slightly, a little bit, more specific about what I think is good and what I think is bad.
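Before we get to the frameworks themselves, here is roughly the shape of the view object just described, as a Backbone sketch (the names are mine, and the compiled template is injected from outside):

    var EmployeeView = Backbone.View.extend({
        initialize: function (options) {
            this.template = options.template; // compiled once, elsewhere
        },
        render: function () {
            // no concatenated HTML: merge the model into the template
            this.$el.html(this.template(this.model.toJSON()));
            return this;
        }
    });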
So the way we look at frameworks — and I don't think it's very controversial — is that there are really two categories of frameworks. On the left, you see what we call full-stack frameworks. These are frameworks that try to solve all the problems in a web application: from DOM manipulation — so how you access the different DOM elements — to architectural concerns, so really providing that MVC type of structure. And then, on top of that, making your application look good — so these frameworks typically come with a bunch of UI components, styles, themes, and that kind of stuff. So they cover the whole stack. In that category, you have Sencha, jQuery Mobile, for instance, and Dojo — and there may be additional ones. Now, this is great because it's a one-stop-shopping type of approach: you don't have to kind of make sure that all the pieces work together, because hopefully the creators of these frameworks do that for you. The downside, obviously, is that it can be very heavy, and that in order to use one piece, you may have to embrace a lot of other things that you don't necessarily like. And a good example of that: there are a lot of people who start with PhoneGap using jQuery Mobile. And I ask them: well, why did you choose jQuery Mobile? And they say: well, it came with all these default components and the transitions and all that, I really needed that — so that's why I embraced jQuery Mobile. Now, the problem — so basically, that person, and there are many, was telling me: I really liked what they have here in the UI layer, and therefore I embraced everything else. Now, the DOM stuff is not a real problem — it's obviously jQuery Core. But the real problem is the architectural stuff, which in jQuery Mobile you may like — and then it's fine, you can embrace it — but a lot of people are fighting it. And if the only reason they keep fighting is to keep these UI components... well, the other approach, if you are fighting it, is: don't use it, essentially, and build your own stack. And I have to say that these days, this is the more popular approach. And basically, in that case, you say: hey, for DOM manipulation, I'm not even going to use jQuery — maybe I'll use Zepto, because it's a clone in terms of syntax but lighter weight, because it doesn't care about old browsers; and when you build an app for mobile, you really don't care about old browsers. Or some people say: you know, jQuery was really helpful three years ago, but now you can pretty much do everything it does in plain JavaScript, so let's do that. Whatever seems right for you — you kind of pick your answer. Now, the big discussion is happening there in the middle, in these micro-architecture frameworks. You know, some people love Backbone, some people love Angular, some people love Ember — and that's fine. You choose what you like, because these frameworks are lightweight. I think Backbone is certainly the leanest, least directive type of framework, but even Angular, Ember and Knockout are still reasonably sized. And you use that, and then on top of that you may say: hey, I need to make my application look good, so I'm going to use a UI toolkit. Now, what you typically see in that category are UI toolkits — I don't even call them frameworks, because they don't do anything in that architectural box. They are typically 95% CSS. And if they are more than that — if they try to do stuff in the architectural layer — you shouldn't use them, right?
So you should really find one that focuses exclusively on that layer and that doesn't bleed into the other layers, because that would mean it's not very well partitioned. Now, Twitter Bootstrap and Zurb Foundation are nice; they were not really created for mobile apps initially, even though they are adding components. Two good examples of UI toolkits would be Ratchet CSS and a new one that we are actually working on at Adobe called Topcoat. So I'm going to show you Topcoat. So, why did we do that? Because one of the complaints that people had about PhoneGap was that it was too hard to build applications that looked good. And let's face it, initially the early adopters of PhoneGap were people like you and me — developers. And we are extremely good at writing code, but our skills are not necessarily great at coming up with great design and using the right gradient and the right text shadow and that kind of stuff. So we created apps that worked, but that looked terrible, to kind of simplify. So people asked us: please, please, please help us there and give us some options. This is really, really early, and it's developed completely in the open — it's a GitHub project, an open-source project. And it's 99.9% CSS; in fact, at this point it's really 100% CSS. So you can still use the framework that you like and build the application the way you like, and we give you the themes and the skinning and stuff like that. So all these things — and that was the advantage of doing my presentation really in a PhoneGap application — I can show you these components. There is a lot more to it, and you can go to topcoat.io, which is the website, or you can go straight to the GitHub repository, get the code, and even, ideally, contribute to the code. Now, let me show you another example. So this one is actually Ratchet CSS. Really, really simple and cool. And I like it a lot when I have to prototype, especially an iOS-looking type of application. Really consider Ratchet CSS, because, again, it focuses exclusively on that layer, it's very lightweight, very simple to use, and gives you a great look and feel out of the box. It looks a lot better on the phone, actually, because it was really built for that. The idea of Ratchet is really iOS-specific; Topcoat tries to be kind of a neutral theme. And if you followed the announcements on Monday — iOS 7 and all that — you will see that everybody is kind of going to a very flat, Windows Phone-like look, even the new versions of Android. And now iOS is very flat: no gradients, no kind of big chrome stuff. It's really the content being king, and that's really the idea behind Topcoat. All right. There's a little delay here. Okay. All right. So let's keep going in terms of architecture. And now I'll need your help, because it's getting late for me, too: I'll show you some code and you'll have to tell me what's wrong. Anybody wants to play that game with me? So this is something that I see a lot when I look at existing PhoneGap applications — and in fact, it doesn't have to be PhoneGap. I see this type of call throughout the code. And think about maybe even the code that you write — hopefully not too much — but I see that really across the entire code base. And what's wrong with that? What's that — a bit chatty? No error handling? You guys are really going into the details. But at a higher level, what is this? Well — because you may need data from the server? Yeah. Yeah. And we'll speak about that.
Ideally, you'd love to never have to go to the server, because it's going to be a lot faster. But sometimes, obviously, you need data from the server. But the question is interesting, though: this is obviously an Ajax call, and now, throughout my application, I'm marrying myself to Ajax, right? And I'm saying: okay, well, my data access strategy is Ajax, and it's really hard-coded everywhere, across many different components. Now, what if one day I decide that I don't want to do Ajax anymore, but I decide to be a lot more aggressive in terms of caching and try to get data from a local database on the device as much as I can? Or what if I say: hey, isn't unit testing a good idea? And how granular can a unit test be if I have a server dependency — does that really make sense? So this is not really good, to be so specific about your data access strategy. So principle number five is that you really shouldn't do that, and you should abstract your data access strategy. Now, this is one of the common-sense things that I told you about at the beginning of my talk, of course, and you have been doing that in other languages forever. But for some reason, when people come to PhoneGap, it's "Ajax, let's do it, let's go", and you see that code all over the place. So here is what the same code, roughly — and we may add, you know, error handling and that kind of stuff — should look like at a high level. Because if you read this code, you clearly have no idea, at the end of the day, how the data will be retrieved, and from where. Okay, this is very generic: you have a data adapter that's going to do the trick behind the scenes. And basically the way you do that is by implementing a common interface, and then you have these pluggable adapters that implement the exact same interface — they all implement the same interface. And typically, I always start with an in-memory adapter, because that lets me really iterate super fast. What you should never do is wait for the server-side developer to build your big stack of RESTful services, because then you kind of tie yourself to these services. I like to be very creative initially and say: okay, let me come up with fake data, because then I can really build the application that I have in mind, and then we'll solve the problem of getting to the data. Now, I know that it's a little bit backwards compared to what we've been doing the last 15 years, where we started at the server side. But now people understand the value of a good user interface, or user experience, so I typically start with that. I have a WebSQL adapter, because that's still what iOS supports, rather than IndexedDB. I may have a local storage adapter. And of course, I may have an Ajax adapter. And the benefit of this: if I say, you know what, the data that I used to get through Ajax, I now really want to get from my local database, I can just unplug one adapter, plug in the other one, and I'm done. Okay? So let me show you a few examples of that. So I'm going to start with this one. Again, there are different ways to implement them; I'm going to start with a very simple one — again, implemented as an object, as a constructor, essentially. So that's to get data about an employee, and I called it the JSON adapter. And basically, it defines an API: there is an initialize method, in case I need one — if it was a database, maybe I need to open the database or do something of that nature.
But most importantly, the real API is defined: findById, findByName, and you go on — findAll, findByManager, et cetera, et cetera. Now, the implementation of that adapter here is indeed to make an Ajax call. Okay? So I don't make my code a lot more complicated — in fact, I think it's cleaner — but the benefit is that now, if these services are not ready, I can go to this guy and start with that. So this is my memory adapter, and you see how simple that is: I create dummy data, in this case in an array, and then I implement the exact same interface. Okay? And you can go on and on and on. So I have a WebSQL adapter here, and you recognize your old friend SQL there — but again, behind the exact same API: findById, findByName — and in there, I do some SQL. Okay? And now I can really plug and play these different adapters. Now, let's go back to the memory adapter. So it seems simple enough — it's even simpler than in other languages, because you don't have to formally define an interface and all that. But — well, actually, let's go back to the JSON one. Obviously, the methods have to have the same name, but that's not enough, because what is an Ajax call returning? Obviously, I'm using jQuery here to make my Ajax call — what do I get back from an Ajax call? A promise. So essentially, an Ajax call is asynchronous. So to make the plug-and-play thing work, not only do your methods have to have the same name, they have to be invoked the same way. Because if findById in my Ajax adapter returns a promise — so is asynchronous, essentially — and my in-memory adapter is synchronous, then it's not going to be plug and play, right? Because I'm going to have to invoke the methods in different ways. Which is why you notice that when I look at my in-memory adapter — and this is the jQuery implementation of that — I create a deferred object that I resolve immediately, okay? But the call still has an asynchronous shape, and indeed the method returns a promise — the promise of that deferred object. And now, even though this could clearly work synchronously, because all the operations are synchronous, I provide that object with an asynchronous interface so it matches my Ajax adapter, which obviously is going to be important. So that's the little trick: you need to make sure that not only do they have the same names, but they work the same way — they are all asynchronous. Okay. This one is really interesting, too, and I see a lot of issues with it as well: abstract device features, which also means keep your application browser-runnable. So what do I mean by that? Okay, let's take an example. So there is a button there that says "show notifications". So let's press that button. Let's press that button. Okay. How do you like this? You like it? Because I'm nice, I agree with you. But what can you immediately say about this application? And it's not a big secret, because I told you before — but what can you say about this application? It's HTML. You love the body of that message, but you don't really like the header, because if you use a JavaScript alert, like many people do when they develop a PhoneGap application, users will see "index.html". And you just gave away the fact that your application is not native, and that's confusing for users. You know, casual users will say: what is that thing? I'm not in a browser. Okay.
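Staying with the adapters for one more second, here is a hedged sketch of that deferred trick — jQuery assumed; the dummy data and names are mine:

    var memoryAdapter = {
        employees: [ { id: 1, firstName: "Dwight" } ], // fake data
        findById: function (id) {
            var deferred = $.Deferred();
            var match = this.employees.filter(function (e) { return e.id === id; })[0];
            deferred.resolve(match);     // resolved immediately (synchronous data)...
            return deferred.promise();   // ...but callers still get a promise
        }
    };

    // so the calling code never knows (or cares) which adapter is plugged in:
    dataAdapter.findById(1).done(function (employee) { /* render it */ });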
So — back to that alert — you're going to say: okay, that's easy to solve, because, as I said, PhoneGap gives you these bridging APIs, and one of the bridging APIs, you can see there, is basically a bridge to a native alert. Okay, so let's try that. Phew. Okay — it looks much better, because now you can put whatever you want in the title, in the body, and even on the button; I can now relax, because it looks great. But if I use that, the problem is that it doesn't run in the browser anymore. And my rule was: keep your application browser-runnable. And the main reason for that is that you want to be able to debug it easily. Believe me, debugging on this, even though it's getting better, is not fun. Debugging on this, using, for instance, the Chrome developer tools — really good tools. So if I can keep debugging my application here, that will make me a lot happier. So don't just replace all your JavaScript alerts with the native bridge, because that's one easy way to totally break your app inside the browser. So now you know the approach: we are going to do the exact same thing we did for data access. You're going to abstract it, one way or another: you're going to have some kind of a notification interface (sketched below). Okay? So kind of the exact same idea, if you want. And this is not the only thing that you need to abstract. So we spoke about notifications; we spoke about storage, with the data adapters; then there are the sensors. Sensors are everything, essentially, that you can do with that thing — like access the GPS, the accelerometer, the compass. So these are all sensors that are available here, but not necessarily here. It's coming, but right now a lot of them are still not available in the desktop browser. So just don't assume that you can invoke these things, because again you break the rule — keep your application browser-runnable — and debugging is not going to work anymore. Now, the last one is so interesting that I'm going to spend one specific item on it. You need to abstract the user interactions with your application, because obviously, if I work on this, it's mostly going to be through touch; if I work on this — at least the machine that I have — it's mostly going to be through the mouse and the keyboard. So principle number seven is that you really, really need to handle touch. Just don't ignore it, like I see so many people do. So let me show you one way people ignore touch and simply do mouse — because guess what? For developers who are not interested in the finest experience, mouse events actually work as touch events. In other words, if you use a click event and then tap with your finger, it's going to work. Not well, but it's going to work. And I see, in probably over 50% of the applications that I look at, people don't bother — they use these mouse events. And so what happens, for instance — so here is an example of what happens. So I have a list of items that I can select, I can tap. So let's do that: I'm going to select one. See the problem? So let's do it again. Before the event even reaches the application, it takes over 300 milliseconds. That's 300 milliseconds that I have no control over — my application has no control over. This is the OS. And why? Because the OS is wondering if you mean double-click. So it says: if after 300 milliseconds you didn't tap a second time, I'm pretty sure it's a click — unless you're really, really slow — so at that point, I'm going to send a click event to your application. Now you may say: come on, 300 milliseconds, who cares? People care a lot.
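That notification interface can be as small as this — a sketch that feature-detects the PhoneGap bridge and falls back to the browser, so the app stays browser-runnable (the wrapper function name is mine):

    function notify(message, title) {
        if (navigator.notification) {  // running inside PhoneGap
            navigator.notification.alert(message, null, title, "OK");
        } else {                       // plain browser during development
            alert(message);
        }
    }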
Now, about those 300 milliseconds: on mobile, if it's not immediate, people reject your app and say "hey, this doesn't feel native". It doesn't have to be that way. A lot of people say: look — PhoneGap, it's slow. Look, it doesn't react. That's what people who don't like hybrid applications enjoy doing, that kind of stuff, to prove you wrong. But it doesn't have to be that way. So let me show you the exact same list, and I'm going to do this. Now you see basically what's happening: I'm using touchstart here, which is triggered right away — two, three, four or five milliseconds — whereas my click event is still struggling, because it's waiting to be triggered by the operating system, which is wondering if it's a double-click or not. So you say: great, that seems pretty simple, I'm just going to do a global find-and-replace of all my click events with touchstart. Now, if you do that, you break my other rule — nothing is going to work here anymore. And I wish it could be as simple as that, but it's not only about tap events — it's not only about touchstart. Sometimes you have composite events — like, okay, I want to swipe — and if you just use touchstart throughout, you're going to be in trouble and you're going to break a bunch of stuff. So basically what you have to do is feature detection: you say, well, if touchstart is available, I'm going to use that; if not, I'm assuming that you use a machine like this, and I'm going to register a click event. So that's pretty messy code, right, having to do feature detection at that level? So before you write that code — again, there are libraries that will do that for you automatically, and a really good one is called FastClick. And basically, FastClick is a drop-in library: you basically just add the script tag, and magically it's going to do exactly what I said. So you keep coding your click events, essentially, but FastClick is going to hijack your click event and check: okay, is touch available on this device? If yes, I'm going to replace your click event with a touch event. Okay? So it's not the only one, but again, it's actually a pretty good option. Yeah — well, FastClick does the right thing: swipe is still going to work. In fact, I'm using FastClick in this application, and you see that I can still swipe. So that's the beauty of it, because handling that code yourself can be really complex. So before you venture into that problem, look at FastClick. Another good one is called Hammer.js — it also handles all the touch events really nicely. So these are libraries that have been tested in the field, and we know they work. Okay. So, number eight — we're getting there. Another reason a PhoneGap application may not deliver the best performance is when you don't use hardware acceleration. Now, let me take a mental break here and show you one example — and this one I will show you again later. So this is a prototype of a game. Okay — so you don't see anything... Okay. Oh, yes. Okay. So you don't have the best experience, because I'm streaming my screen, but hopefully you can see that this guy is running pretty fast — it's super smooth on my screen here. You can come and check at the end of the session. So you need to use hardware acceleration. The place you typically see it immediately is when people slide pages and it looks jerky — it doesn't look like the smooth native experience. You need to hardware-accelerate these transitions.
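FastClick really is a one-liner to adopt — and for comparison, the manual feature detection it saves you from looks roughly like this (listItem is a placeholder for whatever element you're wiring up):

    // drop-in: include fastclick.js, then
    window.addEventListener("load", function () {
        FastClick.attach(document.body); // keeps your click handlers, uses touch when available
    }, false);

    // by hand, you'd be doing something like this everywhere:
    var tapEvent = ("ontouchstart" in window) ? "touchstart" : "click";
    listItem.addEventListener(tapEvent, onSelect);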
So, about those transitions: don't use jQuery animations to do this. Don't do these transitions in JavaScript — essentially, don't do them on the CPU. You need to execute these transitions on the GPU. Okay. So here is a typical setup. Don't worry, the screens are not going to bleed out of your screen — this is just my way of trying to represent it. A typical setup is: when you're going to slide a view from the right to the center, you position it immediately to the right of the viewport of your WebView. Okay. And you define a CSS class — typically I call it "right" — to say: okay, this one is to the right of the viewport. The other one has the class "left", which positions it to the left of the viewport. And essentially, if I want to move the blue one to the center, I press this button here — and you see, it's nicely animated. The only thing I did was change the class name to "center". If I want to put it back to the right, I do that. If I want to move that one... and you see — I hope we don't lose too much responsiveness through the streaming — it's pretty snappy. Okay. So how does it work, and how do you do it right? So typically, people will start this way: they'll say, okay, I got it — for "right", I'm going to set the left style to 100%. 100% is essentially the width of the viewport, so if you set the left attribute to 100%, it puts the view just past the right edge of the viewport; left is going to be minus 100%. And that's how you do it. The problem, if you do it this way — it's going to work just fine. Okay: when you change the class of a div, it's going to move it to the right position, and because you also add the transition class, that thing is going to be animated. So it's good, because this is already a CSS animation as opposed to a JavaScript animation. That is progress. But this is not hardware-accelerated. It's going to work, and depending on your hardware, it may be pretty jerky. So here is the right way of doing it to get hardware acceleration: you need to use a transform. And so it's essentially very much the same thing, but instead of setting the left attribute, you translate the x coordinate. The first parameter of translate3d is actually the x coordinate, and then the second is y and the third is z. Okay? So when you do that, using translate3d will automatically execute that transition on the GPU. Okay — the previous one is still doing it on the CPU. So a very small technique, but that is what's going to make it faster. And that's going to work on the desktop as well. So this was such a common problem that I decided to write a little library — a kind of micro-library — that does just that. If you want to get it, it's on my GitHub repository. It's called page-slider, and it's very, very tiny, but it does exactly what I just told you, in an efficient, hardware-accelerated way. No — for that you would probably have to use a polyfill and do some feature detection. Well — on Android, you don't really get hardware acceleration in the same way. So certainly on older versions of Android it's not going to be hardware-accelerated. But you still have to optimize where you can, and certainly on iOS this is going to make a big difference. It will work on Android — it will just not be accelerated; it's still going to be on the CPU. All right.
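Here is that setup in CSS, as a sketch — the class names follow the talk, and the -webkit- prefixes were still necessary on 2013-era WebViews:

    .page { position: absolute; width: 100%; }
    .page.transition {
        -webkit-transition: -webkit-transform 0.3s ease-out;
        transition: transform 0.3s ease-out;
    }
    .page.left   { -webkit-transform: translate3d(-100%, 0, 0); transform: translate3d(-100%, 0, 0); }
    .page.center { -webkit-transform: translate3d(0, 0, 0);     transform: translate3d(0, 0, 0); }
    .page.right  { -webkit-transform: translate3d(100%, 0, 0);  transform: translate3d(100%, 0, 0); }

    /* sliding a view in is then just a class change:
       pageEl.className = "page transition center";  */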
Now, we are getting into the subtle things — but still, we need to be really perfectionists, if you want. So: hide HTML behaviors in your application. And we spoke about one, which was the alert, but there are more subtle ones. Like this one. Have you ever tapped, or pressed, a link for a long time? Let's try. Okay — and then you get that. That's the default iOS behavior when you long-tap a link: "hey, you seem interested in that link — would you like to copy it? Do you want to open it?" No. No, no. For me, this is not a link. I happened to implement this as a link, but for me, in an app, this is a list item. So please don't do that, okay? Because again, if someone sees that, first of all, that's super confusing, because that's really not what was expected — but again, you gave away that your application was not native. So it doesn't have to be that way. A simple style that you define on the anchor element — typically -webkit-touch-callout: none — will get rid of that. So now I can do this as long as I want, and that little pop-up is never going to show up. Now, you still see a little thing that doesn't look right: that highlight is there. It's not something that I wanted — it's by default; you see, when I tap, you get that highlight. And you don't want that — maybe you want your own, but you don't want that one. So there is another style that you can apply — the tap highlight color, typically -webkit-tap-highlight-color, set to transparent. And by the way, I will make all these slides available — I see a lot of people taking pictures, which is great, but if you go to my blog, you'll be able to go to /NDC.pdf, and I'll put all that in a PDF file for you. NDC.pdf, yeah. So now it's all gone. Okay — now I can do what I really wanted to do, but I don't have these HTML behaviors. Great. Now, we have just a couple of minutes for the fun part of my presentation, where I will really quiz you. So we covered a number of performance techniques already — hardware acceleration, the click, the 300-millisecond delay that you really want to avoid — but there are a couple of typical problems that we have seen in applications, and I'll let you guess what they are. So, what's wrong with this code? So I need to display a list of states in the United States. Now, you may not be familiar with that, but states in the United States don't change every day. There are 50 of them; it has been like that for some time, I think. So what's wrong with it, based on that helpful piece of information? It's static data. You should never make that call to the server. I mean, it's already bad in general, but on a mobile device, that's criminal. Okay? You should never make that call to the server. Yet I see tons of stuff like this — and you have a full-blown SQL database available to you on the device. Okay. Or, if it's not that big, it can go in local storage, or wherever you want. This is another problem that I see all the time, and if you add them up, that is what creates slow applications. What's wrong with this one? Exactly. So, if you didn't know it, this is essentially a query that I'm executing on the DOM, and it looks like I'm trying to style and configure a bug button in my header. So I'm adding a click handler, I change the color, change the text decoration, and I set the href link. So I do four operations on that same button. But every single time, I re-query the DOM. It's as if, every single time you wanted to read a field of an employee record, you re-executed the same select statement. It's that bad. Okay? So obviously, the right answer is to do something like this.
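"Something like this" being, as a sketch — one DOM query, cached, with the dollar prefix as a naming convention (the selector and handler names are mine):

    var $button = $("#header .bug-button"); // query the DOM once
    $button.on("click", onBugClick);
    $button.css("color", "red");
    $button.css("text-decoration", "none");
    $button.attr("href", "#bugs");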
You execute the query only once and you put it in a variable. You can use the dollar sign as a convention to remind you that it holds the result of a query on the DOM, and then you use that cached variable to do your business on that button. This one is a little more subtle, but it's even worse. So this is a view of some kind that I will instantiate multiple times — I will create multiple objects, because it's an employee view: every time I need to display an employee, I will instantiate that view. I see that all the time as well, and this is really bad — it has a big impact on performance. What's wrong with this one? Exactly. I have that template there — it's a Handlebars template, and it doesn't matter which templating technology you use. In the initialize of the employee view — so that's going to be executed every single time — I recompile the template. The process of compiling a template is typically to create a function that you can pass an argument to, to do the model merging. There is absolutely no reason to redo that: you should compile it once, and not every single time you create an instance. So when people start using templates, I see that a lot, and that's not good. So a solution to that is to pass — to inject, if you want; that's a kind of dependency injection — the compiled template into the view, and you're all set: you don't recompile the template all the time (see the sketch right after this). This one: so I have a screen there with a bunch of icons, and I display these different icons — you see, these are different images. I see that all the time, too. Exactly: this is, what, seven images? In other words, seven HTTP requests to actually load these images. That's also kind of criminal, right? It shouldn't be that way. So what's the solution? To use a sprite, which essentially is one big image with all your icons, in this case, right? And you use that image as the background. You create a little viewport — maybe 18 by 18, which is the size of an icon — and then you offset the background to display the icon that you really want to display. It makes a big, big difference. In fact, that application that I was showing you before — and again, I hope that I get that guy to run fast for you... come on... okay, that's not bad — this is using a CSS sprite. This game is built using the DOM — it's not Canvas at all, it's entirely DOM. And so basically, that character is an image. Now, try to do the same thing by swapping the image: that guy is going to take his time. The only way to make him run fast — and again, you need to see it on my iPad — is to use a CSS sprite and to offset the background. That's the only way you can get to that level of performance. We are almost done. This one I also like a lot, because it's something you don't really think about during development, but that can become a big problem in production. So I need to display an employee. So in this case, again, I have no choice: the data is on the server, so I need to go to the server, get the data. And of course it's an asynchronous call, so when I get the result back — it's a product in this case — then I display the view, I display the product. So imagine I have a list of products. I click one, and then that code runs. It's very possible that for two or three seconds, nothing is going to happen. So I'm going to tap that thing and — what's going on? I don't see anything. Why? Because I'm not changing the view.
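That compile-once injection, sketched with Handlebars and the EmployeeView shape from earlier (the template id is mine):

    // at startup, once:
    var employeeTemplate = Handlebars.compile($("#employee-template").html());

    // per instance, many times -- no recompiling in initialize:
    var view = new EmployeeView({ template: employeeTemplate, model: employee });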
So, again: the problem is that I'm not displaying the view until I get the data back. The rule is: don't wait for the data to display the view, because people can wait for data — if you display the right visual cue, that's fine — but people don't wait for the UI. As soon as you tap, you should move to the product detail view; it's empty initially, and then you update the view. So you do something — it's here, but — okay, here we go: you display the view, you make the request, you update the view. And the very last one — and probably the most important one, because it's one that people don't think about and it has a very dramatic impact on performance. By the way, most of these techniques are not specific to PhoneGap applications; they're basically web performance techniques at this point. But this one is very, very, very important. What's wrong with that code? Yeah — so it looks like I need to display a list. How many times am I going to re-lay-out the document just to display that list? I'm going to trigger 100 relayouts on that document, which is crazy, because relayout is probably one of the slowest operations performed by a browser. And here, for absolutely no reason, I'm doing that 100 times, because obviously the only thing the user is interested in is the final list, right? And that's still the only thing the user is actually going to see. Okay, so what's the solution to this? Yeah, exactly — and again, think about it. So this is one simple way of solving that problem: you build a string, and then, at the end, one DOM operation — so one relayout versus 100 relayouts. You would be amazed, if you go through your code, at the difference that this is going to make to your application. And that's really what I wanted to tell you. So let's quickly summarize: use single-page applications; use templates — Handlebars.js is a really good one if you don't know what to use; use some kind of MVC architecture or framework — I told you my personal preference for build-your-own-stack as opposed to the full-stack frameworks, even though they have a place; abstract data access; abstract device features, specifically touch events — don't take that 300-millisecond delay, please, I don't want to see that; hardware-accelerate your transitions; watch these subtle HTML behaviors; and then architect for performance. That's all the time we have. I'll be around if you have additional questions. Thank you very much.
Tired of Hello World? In this session, we explore best practices to build real-world PhoneGap applications. We investigate the Single Page Architecture, HTML templates, effective Touch events, performance techniques, and more. This session is a must if you plan to build a PhoneGap application that has more than a couple of screens.
10.5446/51401 (DOI)
Okay, thanks for coming, everyone. My name is Craig, from Xamarin, and we're going to talk a bit about developing apps for Android with C# using the Xamarin platform. So, why Xamarin.Android? Why are you here listening to me this afternoon? Because I'm going to tell you how you can write C# apps for Android phones and tablets using the language and the tools and framework that you already know if you're a Windows developer. Also, the C# code that you write for your Xamarin.Android application can be reused for an iOS application, for a Windows Phone application or for Windows Store apps. And you can also bring in code that you already have — from enterprise apps or from one of those platforms — and use it in your Xamarin.Android project. You will be creating native Android user interfaces, because Xamarin.Android uses the native widgets for the applications that you build. And because the application itself is running on top of a native implementation of the .NET framework, you're getting native performance. There's no translation, there's no cross-compiling, it's not running as Java under the covers — it's running as a .NET framework on top of the underlying Android operating system, talking to the hardware. And if you really need to, you can access Java libraries as well. So if there's a third-party control or widget, a third-party Java library, or some existing Java code that you have, we also provide the tools that let you wrap that up in a C# API and include it in Xamarin.Android projects as well. So how does it work? Underneath the covers is Mono, an open-source implementation of the C# language and the .NET framework that was originally started to enable, basically, a programming environment and a Windows Forms clone on Linux, back in 2001, soon after Microsoft announced that they were going to submit .NET as an open standard. And the project itself didn't get released officially until 2004. I'm pretty sure that at that time no one realized that in less than ten years we would have the computing power we do on our phones, and that they would also be running the same sort of kernel, enabling Mono to be put on those phones and run C# on hundreds of millions of devices. So Mono is .NET — a clean-room implementation, but all of the same classes and frameworks that you're familiar with. To facilitate building Android applications in C#, we also need to give you access to the underlying Java widgets and the Android platform as well. So we expose those as bindings — or projections, to use Microsoft's terminology from the Windows Store / Windows Phone 8 side — which basically expose any Java object or class under the covers as something that looks and behaves like C#. So you can subclass, you can new it up, and you can implement anything that exists in the Android Java SDK from within the C# environment. Tying all that together is a compiler, and we're doing something on Android that's very similar to what you're expecting to happen in the desktop environment. Our compiler takes your C# code, and whatever else you've built into your application, and produces IL — just the same as a desktop application for the .NET framework exists as a set of IL. And we package that IL up with a native Android runtime. So it's the .NET framework that you have installed on your PC, but it's Mono. We shrink it down by linking out all of the things you're not using.
It's a strongly typed language, so we can do a map and we can trim out all of the classes and methods that aren't being accessed. So it's the .NET framework itself, linked down to the subset you need, packaged with the IL on top and put into an APK to run on Android. Visually, it looks kind of like this. So you've got your C# and the .NET APIs that you're using. We compile that to .NET IL. We package it into an APK, as I said, with the .NET framework, with the Mono framework linked out to the smallest possible subset that you need, and then we ship that off to the Android device. Now on the Android device, the IL is running on top of the native .NET framework implementation, which is running on top of the operating system. In that stack, if you can visualize it, there's not really any Java involvement. So all of your math code, all of your business logic, anything that's pure C# that you've written for your Android application is running on top of the native framework, which is running on top of the operating system. Where the bindings fit in and where you're accessing Android widgets and user interface, it's a call across to Dalvik. Dalvik's sitting on the operating system as well, and Dalvik's sending screens and data to the user. But we sit side by side with that. The Mono runtime and the Dalvik runtime coexist. They both have direct hardware and operating system access underneath them, and they're both effectively able to communicate to the user. But there's no performance penalty, there's no layers of indirection. It's running side by side. The key, I guess, issue that we have in terms of that interoperability is we're managing our garbage collector, and we're trying to manage Dalvik's garbage collector in a sense, because we're creating and destroying objects on its behalf because we've subclassed them, we've implemented a wrapper for them on the C# side. But it's side by side; we don't sit on top of Java at all. So how do you get that working on your local machine? We have a simple unified installer. You only have to go to xamarin.com/download and start. We'll pull down everything that we can for you and install it onto your local machine, whether it's a Mac or a PC, including the Android SDK and all of our tools that will enable you to get up and running for free, basically. We have a trial version that lets you start developing for free, all of Android's tools are free, and you can side load Android apps no problem, so that you can distribute to your friends or within an enterprise without really any involvement from Google at all. If you want to deploy to Google Play, you can go to play.google.com/apps/publish. It's $25 a year, I think. And the APK file that Xamarin.Android generates uploads fine to Google Play and works on any devices that you can name. So development environment support, once you have installed Xamarin, is the third line there. So on OS X, you can use our own development environment, Xamarin Studio. On Windows, you have a choice. For Android development, you can use Xamarin Studio or, if you have Visual Studio and the Xamarin Business edition license, you can work within Visual Studio to build Android applications, to build iOS applications, as well as your Windows Phone and Windows Store, et cetera. So you can choose to use our free IDE or, if you already paid for Visual Studio, you can use that as well. This is what the two IDEs look like.
You'll be very familiar with it if you're a Visual Studio user; Xamarin Studio won't take long to get used to. The solution structure, all of the file formats are the same. So you can open projects interchangeably in the two platforms. The IDE feels the same. It's got IntelliSense, AutoComplete. It's visually similar. So really the only choice is: have you paid for Visual Studio already? And do you need ReSharper? In both Windows and on the Mac, you have a graphical UI designer. So Android has its own XML format which is used to create screens. We've built our own designer that lets you do it graphically so you can drag and drop, edit properties, the same way you're familiar with dragging and dropping for XAML in Blend or Windows Forms way back when. And Xamarin Studio and our Visual Studio plugin also implement all of the customization, all of the property panes that you need to build an Android app. So if you opened an Android app in Eclipse, there's a few different things that you need to set to make things work. We expose those as a user-friendly user interface as well, so that you can check a few boxes to get the configuration that you would like for your Android application. So there's a couple here that I want to highlight. We'll go from this one first. So you can choose the CPU architecture that you want to target. So you've got ARM or ARMv7 depending on the devices that you think users will have. And there's also an option there for x86. It might not be a platform that you're targeting for deployment, but there's an x86 emulator that's available for development. And you basically want the x86 emulator if you're doing development for Android, because the ARM one can be slow and painful to work with. So you need to know about those settings so that you can enable x86 to get that emulator working. I mentioned linking earlier. So linking is a great feature of Xamarin.Android because it gives you a small APK to deploy to your users. The downside, if you want a really fast compile-test-debug cycle, is that it takes time to do that map of your code and to link out the stuff that you don't need. So we have an option that lets you turn linking on and off to speed up your development time. So you can change that setting. It's just important to remember to turn linking on for your production deployments to keep the APK you distribute to your users as small as possible. And finally, I want to point out the setting over here, use shared Mono runtime. So this is a feature that we've enabled specifically, again, to help the compile-test-debug cycle be really fast. Because of how I explained earlier, on Android devices we put the Mono runtime on there and have your IL sitting on top. The immediate question that might come to your mind is, well, on a PC there's only one instance of the .NET runtime and apps are just IL. They can be really tiny and they all share it. That's possible for us to do on Android. So we can install the shared runtime like it's on a PC, one copy, and you can have 100 Xamarin.Android applications which are just the application IL. Maybe 20K, maybe 40K, maybe something really simple. And so we enable that ability for debugging, because copying up a meg or two meg of runtime for every compile-test-debug cycle, you know, it's probably slowing you down unnecessarily. But we do not enable that for deployment to production.
We don't want to be the gatekeepers of multiple or incompatible copies of a .NET framework on a user's device that's got Xamarin.Android apps from different places. So the shared runtime is there, and if you're using our product you'll notice it and you might think to yourself, why can't our users just have one shared runtime? And the answer is basically that we want to protect them from themselves by having each application ship only with the runtime that it was developed on. So keep that in mind if you're using the product. And the other thing, of course, is AndroidManifest.xml. So if you're an Android developer already and you've used Eclipse, you'll notice that there are very similar kinds of options for speedily entering the most common things that people add to their Android manifest file, which is loaded up by the operating system to determine certain things about the application. Most importantly the version and the required permission set. So Android has that concept of a sandbox that your app runs in, and it can only access telephony or network or other things if you request it and then the user agrees when they install it. So we provide those settings in an accessible way, just the same way as Eclipse or IntelliJ provide for their Android application settings. The one thing that you won't see happen with the Android manifest in Xamarin.Android, for those of you that are familiar with Android development, is you don't need to declare every activity manually in your Android manifest. And I'll show you when we look at some code how we achieve that, using attributes in C# to have classes effectively put up their hand for participation as an activity in the Android manifest rather than having to keep it in sync manually. We also have deployment options so that you can easily publish an Android application. We have a wizard that lets you find a keystore and sign the app and do the, what is it called, binary alignment, and output an APK that's ready to either distribute to users or to upload to Play. So that's just a quick overview of Xamarin.Android. Probably the easiest way to understand it is to see some code being written poorly. So we're going to build this app. If anyone's already familiar with Xamarin tools, you've probably seen our favorite to-do list application as a sample already. It's useful because it's a really small application that you can write in under 100 lines of code, but it utilizes a lot of different things about Android, lists and navigation, that help to show the platform off. Let's go over; we're going to use Xamarin Studio. I've got a Mac here obviously. So new up an Android application: AndroidToDo. And the first thing you'll notice is we try and provide you with something that's going to work out of the box. So if you run this application up right now, you can see there's like a button that counts how many times it's clicked. It's the classic hello world working application. So it's given us a main activity. If you're not familiar with Android, an activity is kind of like a screen, it's kind of like a view controller. It is a class that is normally associated with a layout. So the activity class is where we write our code. And there's the concept of a layout, which is the XML that describes what widgets are on it, that we reference and then connect to work with those elements. So let me go through where those things live. The Resources folder is an Android standard location for certain sets of application parts, I guess. So drawable is a subfolder of that.
It's always called drawable. And it always contains images or XML files that describe the way images should be used. So we've got an icon there. And we can just maybe add a new icon. Oops, target. And we'll copy that in there. So icon. The layout folder is where the XML lives that describes your screen. So if I open this main XML file, you'll see, as I mentioned, we've got our own custom designer, which lets you view source and work with the visual design. And a strings file, which is a lookup for string values that helps you with managing them and also with localization, I guess. So I'm just going to start customizing this. So we have an add task button. And let's look down. We need a list view to display the list of tasks as per the mockup that I showed you guys a second ago. So we drag that on there. It's kind of non-visual. But if we look in the source, we can see the XML. So this is Android's native XML format. There's nothing really to do with Xamarin except that we provide the editor and let you work with it. So I'm going to just type in the IDs here because I can. We could also have done that in the layout editor. And you can see LinearLayout; if you're a .NET person, that's like a StackPanel. And we've created widgets in there with some layout metrics and some Android syntax there: @+id/ and the name of the control. So just get used to seeing @+id/; the stuff after it is the stuff that you care about. Because we're in the process of building those two screens for the task list, I'm just going to rename this task list and add a new layout and call it task view. So let's just see what's there. It's given us a linear layout to start with. It's kind of default. And we want to add, let's say, some text. So our task is going to have some text, a checkbox, is it done or not, and a button. So let's just edit that to Save. Let's just edit that to Done. And yeah, we want to give these useful names. So rather than going right into the XML, let's just use the property pane over here to set those values: title text, done checkbox (oh, typo) and save button. So there's a lot of other properties that you can set for all of these controls. It's dependent obviously on the type of control. If we pop into the source now, it's keeping that in sync for us. The designer also lets you try out what the layout you're working on might look like on other devices. So I've picked the default Nexus One, but if we had a really tiny screen, it would look really tiny. And if we put it on a tablet, it would look kind of ridiculous. It's slightly more powerful than that though, because you can tick these locks and say, I want to keep this layout on this form factor, and then change the form factor and start making changes. And that's a feature of Android that Google's enabled to help with the fragmentation of screen sizes and device capabilities. So what will happen is a separate layout folder gets created for each different fragmentation vector, if you like, and it'll save a different copy of this layout with the various different changes that you've made. So if I went to the tablet, maybe rather than having it full width, I just push everything over to the left, for example. The control names remain the same, and when it's loaded on the device, Android takes care of loading the correct layout for me. I just have to manage all the control names, making sure they all exist in all the various layouts that might be in use.
But that stuff is all supported by the designer, and it gives you a big helping hand when you're trying to deal with devices, multiple screen sizes particularly. So we've just drawn these two layouts, named the controls. First thing you'll notice now is the layout reference in the code is broken. So I renamed main to task list. So let's just change that and fix it up. We don't want that button handler. I might just rename this as well to task list so it's simple. And while we're here, let's look at this Activity attribute. If you're familiar with Android already, the AndroidManifest.xml is a way, or a central location, to define parts of the application that can be used. And one of those things is an entry in that XML file for every activity that you've declared that you want to be accessible, so other activities can navigate to it. That's fine. Everyone's used to doing that. But it's a two-step process if you create a class and you then want to go and update the manifest. And other tools will help you do that automatically. The way we've chosen to do it is implement an attribute. One of the great things about .NET and C# is the metaprogramming capabilities of attributes. So anything that you could have added kind of weakly typed, because you're just entering a string into AndroidManifest.xml, in Xamarin.Android you can do it strongly typed in the IDE directly in your code. So we can say, for example, that we want to set, let me zoom in there. We want to set the icon. So again, with the @drawable/, it's an Android thing and you'll get used to it. But the label, which is what our application is going to appear as on the home screen, is set here. MainLauncher = true is an Androidism, which is saying to the operating system, this is the first activity that opens when this application is run. So it effectively gets an icon on the home screen. So at compile time, Xamarin.Android is taking that attribute along with various other things and updating the Android manifest so that what goes to the device or to the simulator is consistent. The activities all exist and they're referenceable. You can call them from each other. Let me just rename that as well. So I want to just quickly get the navigation working. So let's add some code here. We've already got that. And we need to declare the button. So this should look familiar to any .NET developer and it will probably look very unfamiliar to someone that's only done Java, because there's like a whole anonymous class missing. One of the benefits of using C# to build Android applications is that we brought the C#-isms to Android. So being able to just attach an event handler like this is quite a bit less code than the Android Java way of doing things, of creating an anonymous class and implementing a method that you can use to perform the click. And we've also taken FindViewById, which, if you're familiar with Android, returns an Object. In Xamarin.Android, we've made that into a generic method so that you can always get back a strongly typed result, which just saves a whole pile of casting parens every time you use that method. So little things like that, we've laid on top of the way Android works to try and make it more familiar for .NET developers. It would be unusual in .NET to have a method that you use almost every single day always returning an Object. And you then have to cast it to something. Generics make much more sense for .NET developers to do that.
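For reference, here is a minimal sketch of what the main activity looks like at this point in the demo. The class and control names (TaskList, TaskView, AddButton) follow the talk; the exact resource identifiers are assumptions, not verbatim demo code.

    using Android.App;
    using Android.OS;
    using Android.Widget;

    // The [Activity] attribute generates the <activity> entry in
    // AndroidManifest.xml at compile time, so the manifest never
    // has to be kept in sync by hand.
    [Activity(Label = "AndroidToDo", MainLauncher = true, Icon = "@drawable/icon")]
    public class TaskList : Activity
    {
        protected override void OnCreate(Bundle bundle)
        {
            base.OnCreate(bundle);
            SetContentView(Resource.Layout.TaskList);

            // Generic FindViewById<T> returns a strongly typed widget,
            // so there's no cast from a plain Object.
            var addButton = FindViewById<Button>(Resource.Id.AddButton);

            // C# event syntax instead of Java's anonymous listener class.
            addButton.Click += (sender, e) => StartActivity(typeof(TaskView));
        }
    }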
So you'll see in the code I've just pasted in there, I want my add button to start a new activity called task view. And we've already done the layout for that. But like I mentioned, the layout and the actual activity itself are two different things. We need to create another class to load that layout, the task view layout. So I choose Android class, task view. And we've got, so like I said, I'm going to make it public. Because this is an activity, I need the Activity attribute. And I'm going to give it a label. So again, this stuff is going straight into the Android manifest. And then I want to inherit from Activity. So this is another great thing. Auto complete is auto completing for Java classes. Everything that's in the Android SDK, like I said, we surface it up as though it's C#. So when I'm doing partial completion, I know that that's Activity. And I can just work with it. And then my auto complete is not working. Okay, let's go to the backup, which is over here, and steal it. So we want to SetContentView. Oh, what do I need? Override OnCreate. So again, the OnCreate method, all the Android methods, already there. That's it. SetContentView. And we want to wire up the save button. And we haven't got that done yet. So what I've done so far is: on the list view, we've got an add button at the top, which we drew in the layout. And on the detail view, the task view, we added a save button. And I've just added click handlers to both. I'm just going to run it up so that we can see the user interface, how it works with just the C# that we've written so far. Android emulator already running. It is not speedy. And there it is. We can add, and save goes back. Nothing else is functional. But it's fairly easy to get an application up and running and start drawing widgets. Your typical hello world can be done with very little C#. So when I got to this stage in building the detail view, now that we've got a detail view, we need some object to display in the detail view. And for that, we're going to create a new business object class. I'm going to put it in a folder called Core because I may want to reuse that. So let me just create an empty class called Task. And so here we have a very simple class. It's got an ID, a title, and a done field. So this is my to-do list object. And I want to store it in a database because I want to have multiple tasks available at one time. And I want to be able to get them out of the list and mark them as complete. So if this were a database, I would probably want the ID field to be a primary key, to auto-increment, so I don't have to manage that and they just automatically work. Both the Android operating system since 2.2 and iOS, for our other product, already include the SQLite database engine. So you can actually use the ADO.NET syntax you already know. It's available in Mono to access a database, to do create table, to do select, insert, update, delete. You can write SQL for all of that in the product. But there's an alternative, which is to maybe use an ORM or some way, since we're working with objects, to see if we can just load and save them without having to write any SQL. Which brings us to the Xamarin Component Store, because the goal of the Xamarin tool set is not just to let you use C# to write applications that you could have written in another language.
It's also to be able to share code and reach across platforms, but also to speed up your development so that you're not writing the same code over and over again, and also to make your development more attractive, pretty or whatever, by providing components that are really easy to get: graphic widgets like charting libraries and that sort of thing, signature pad, cloud services back ends like Azure and Parse, but also a SQLite library like SQLite.NET. So you can investigate the component from here. It's available for the three platforms: Windows, Android and iOS. If I choose add to app, it does a couple of things for me. It downloads the documentation. So the documentation is local. I can quickly see it supports things like primary key, auto increment, indexes and some other stuff. It also adds the component here so that I can track it. I can update it from there by returning to this screen, and it also adds the reference straight into my project. So now I have SQLite.dll referenced, and if I return to the Task class, I can just add the namespace to get that working. The color coding may not be working there. I'm actually in an alpha version of this IDE, but I'm sure it'll work. So now we have a Task class and we have some screens to implement it, but we haven't really tied it together. So let's go to task view and OnCreate, see what we can add. So in this activity, we want a current task property. It's a detail view for tasks; it makes sense to have a property available where we can keep track of the task object that we're currently working with. And then we need to access more of the properties on this view. So this bit gets a little bit different to what you're used to in Windows. When you name the control in Windows in the designer, you kind of expect the IDE or something in the background to put something in a partial class or otherwise make that widget's name available to you. That automatically happens in Silverlight, automatically happens in Windows Forms. In Android, it doesn't automatically happen. So we have this concept of the resource ID. So we've got done checkbox as a resource ID and we need to pull those things out manually into local fields before we can use them. So create local fields for the checkbox and the edit text. And then, so effectively what I'm doing here is what the IDE does for you on pretty much every other platform. And I'm using the resource ID that is generated behind the scenes from the view XML, getting that control and... didn't notice I screwed that up. Checkbox and edit text. Auto complete is great. Title text. So now the combination of these fields and these two statements lets me manipulate the controls that are in the XML that have been displayed to the user on the screen. And what I'm going to do with that is, assuming that a task object has been set, copy the title into the text field and the done value into the check. So I don't know if you guys can see that. Keep forgetting. So that code there could be on any platform. It's classic right-to-left assignment. We've got a business object property we want to display on the screen. We copy it up. Granted, WPF data binding does that for you automatically, but everywhere before that, you're used to writing that kind of code. So now we've got that and we've got this save button. We can uncomment these lines of code here. And it's doing the reverse.
So when we hit the save button, we're taking the user interface values, the values from the widgets, the text and the check property, and assigning them back to the Task class. And then we want to save it. So we've got a task, it's got some values that have been entered by the user. I've already written the code here. So there's some view model with a Save method to save this object. So this is where we need to create the link between our Task class and the SQLite database. To do that, we need a, I'm saying it's a view model, it's a repository. It's another class that we're going to be able to share across platforms. So, task view model. And this is what it looks like. Using SQLite, using System.Collections.Generic. So I've just, you know, quickly typed that out faster than you can see. This is the C# code that we need to wire up the component that I added. So we're creating a SQLiteConnection, an object that's going to let us talk to the database from within C#. In the constructor, we're doing just standard .NET file operations. So this is the beauty of having the .NET framework on Android and having the .NET framework on iOS: I can write Environment.GetFolderPath, get special folder, and on iOS it returns one thing, on Android it returns another. But in both cases, it returns me a valid path that I can write to within that application sandbox. So this code is portable; it will work on Windows as well. We use that path; we're going to call it task DB, that doesn't matter. And a feature of the component is that we can just create the table with the generic parameter, and we now have a task table with three columns, just as a virtue of that call. It doesn't matter if you call it multiple times; it does its own checking to see whether the table already exists. If your Task class has changed since the last time it's run, it will detect that and add additional columns if they're required. And then there's a couple of, you know, your standard CRUD methods I've already implemented here, just for speed's sake. The Save method will detect if your object's primary key has been set; if it has been set it's an update, if it hasn't been set it's an insert. The Get is using some basic LINQ syntax to make sure that the ID matches the ID that's passed in, and in the GetAll we're actually doing an order-by, just because it's interesting, using LINQ syntax. So: return all of the tasks in our database table, ordered by their ID, to a list that we can then just bind to something on the front end. So that's a pretty standard pattern. Those two classes, the Task and the TaskViewModel, don't reference anything. There are no using statements that are outside of what could also run on Windows, what could also run on iOS. So this code can be reused over and over again. Granted, it's only 20 lines now. But if this was your business logic and your database logic and your web services logic, you would be reusing it on the other platforms. So now we can create an instance of that. And so there's the instance of it there. Actually, I want that further up. So we're creating an instance of the view model that I just did. It's not really a view model; please forgive the abuse of that term. We're then using an Android feature which is how you pass information between screens. So this is not like an exhaustive introduction to Android development. But you know, activities are like screens. You can really only pass, let me editorialize, it's easier to pass just simple bits of data between them, so strings and ints.
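To make that concrete, here is a sketch of the Task class and the repository just described, assuming the SQLite.NET component; the database file name and some member names are illustrative rather than verbatim from the demo:

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;
    using SQLite;

    // Plain business object: no Android or iOS references, so it can
    // be shared across platforms verbatim.
    public class Task
    {
        // SQLite.NET attributes make ID an auto-incrementing primary
        // key, so we never manage IDs by hand.
        [PrimaryKey, AutoIncrement]
        public int ID { get; set; }
        public string Title { get; set; }
        public bool Done { get; set; }
    }

    // The "view model" (really a repository) that both front ends share.
    public class TaskViewModel
    {
        private SQLiteConnection db;

        public TaskViewModel()
        {
            // Resolves to a valid writable path inside the app sandbox
            // on Android, iOS and Windows alike.
            var path = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.Personal),
                "tasks.db");
            db = new SQLiteConnection(path);

            // Creates the table from the class if it doesn't exist yet,
            // and adds columns if the class has grown since the last run.
            db.CreateTable<Task>();
        }

        public int Save(Task item)
        {
            // A non-zero primary key means the row already exists.
            return item.ID != 0 ? db.Update(item) : db.Insert(item);
        }

        public Task Get(int id)
        {
            return db.Table<Task>().Where(t => t.ID == id).FirstOrDefault();
        }

        public List<Task> GetAll()
        {
            return db.Table<Task>().OrderBy(t => t.ID).ToList();
        }
    }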
Intents are the mechanism of passing from one activity to another. And the parameters, or the data that you send with an intent to start an activity, is called an extra. So here we've got Intent.GetIntExtra. This activity, the detail view, has a parameter ID, which is the task ID. So this line of code here that you can see is just the Android way of saying: what was the parameter for this screen? If it's... it's non-zero, sorry, we're going to grab that from the database. We're editing a task. If it is zero, we're newing up a task, because we know we haven't got one, and we're going to set some values on it and save it later. So that pretty much completes the editing part of the application. We've created a Task class with some properties. We've created a view model, or a repository, that reads and writes to a database via a component. And we've got an editing screen here where the view has some input controls, and the code behind has code to populate that when the screen is loaded and to save it when the button is hit. So the only problem we have left now is that we haven't actually wired up the list view at the start of the application. So we won't see anything when we start. So let's go. And again, we need to manually wire up the user interface control. So create the local field, call FindViewById with the resource ID. And then again, wiring up an item click using standard .NET syntax. It makes it really easy. We need to do the view model as well. So this is the initiation of creating a new screen. This is the type of the screen we want to create. Send this parameter. And right. And now we've got a screen. Now, I'm not sure how many people were in the iOS talk, but one of the things that's important to understand when you're working with these mobile application frameworks, well, it's the same for Silverlight as well, I guess, is there's an activity life cycle for screens and for the application itself that helps you to deal with a screen being created, a screen being visible to the user, a screen being tombstoned or obscured or paused and then shown again, paused again, shown again, before it's finally destroyed. They have different names; tombstoning is a Windows Phone-specific kind of term, but on other platforms, on Android they get paused, on iOS they go to the background. So you need to be able to handle that. And OnResume is the Android equivalent of: this screen is coming to the front, either for the first time or for some other time since it's been hidden. And that's going to happen every time you go to a task; you add a new task, you go to the edit screen and then come back. So you want those changes to be reflected, which means that in OnResume, we always want to grab the latest list of tasks from the database to display, because something's happened, you know, this screen is coming back from being hidden and the data might have changed that we want to show. The final task we have to do to finish the application, then, is: we've got a screen that saves tasks, and we've got the tasks saved in a database, and we've got this method here which is retrieving the tasks from the database, but we're not displaying them in the list view. So the final step is how do we take a list, an IList of type Task, and tell Android to render that? And the answer is with an adapter class, which is a very common pattern: you take something with a certain API and you kind of write a translation layer to expose it as something else.
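Before the adapter, a quick sketch of the intent and lifecycle plumbing just described. These are fragments from the two activities; the extra key "TaskID" and the field names are assumptions that loosely follow the talk:

    // In the list activity: start the detail screen, passing the row's
    // primary key as an intent "extra" (simple data only: ints, strings).
    var intent = new Intent(this, typeof(TaskView));
    intent.PutExtra("TaskID", task.ID);
    StartActivity(intent);

    // In TaskView.OnCreate: read the extra back out; the second
    // argument is the default used when no extra was passed.
    int id = Intent.GetIntExtra("TaskID", 0);
    currentTask = id != 0 ? viewModel.Get(id) : new Task();

    // In the list activity: OnResume fires every time the screen comes
    // back to the front, so reload the list there, not just in OnCreate.
    protected override void OnResume()
    {
        base.OnResume();
        taskListView.Adapter = new TaskListAdapter(this, viewModel.GetAll());
    }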
And in the case of Android, the adapter is a really simple pattern, and if you were in the iOS talk, it's really similar to the iOS data source that we wrote. So I'm going to add a new file, call it task list adapter, and then just fill out a whole pile of stuff. I want to grab these just to speed things up. So the task list adapter is taking our list of tasks as a parameter; in the constructor it's saying, look, here's the list of tasks I want you to display. And then it's overriding methods, which is the way of Google's API telling us: this is the contract that we have with the list view for it to work. And so the stuff that the list view needs to know is pretty simple. An override called Count: how many rows do I need to display? So the list view is going to ask that before it asks anything else. How big does the scroll bar need to be? How much space do I need to allocate in the scrolling portion of the list? Some other helpful information for using the data source, like return the object for this position, return the item ID for this position. But the next most important method after Count is GetView. So the two things that the list view is doing when it's rendering is firstly asking the adapter: how many rows do you want? And we tell it. And then it's going to loop through for each row: okay, what do I put in this row? What do I put in this row? And that's answered by this method. So what we're going to return is a cell-like view with some text and a checkbox that represents the visual representation of each task. So I'm not going to go into the syntax in great detail. We grab the task for that row from our list. We inflate a view, and just to point out, we're using a built-in view. So we're not creating one from scratch. It's basically just a text field with an optional tick next to it. We're then setting those values. So we're setting the text to the title and we're setting the visibility of the tick to whether the task is done or not. And we return that view to the list view. So we're basically feeding our custom C# class into the list view for rendering. And that should pretty much be it for our application. So let's run it up and see. Oh, typo. Let's try that again. Third time lucky. Nope. Okay, let's run this one. Here's one I prepared earlier. It's the same code, except for the bit that doesn't work. And so, yeah, this add task. And so let's say 'buy milk', save, and 'buy milk'. If I click on that, hit done and save, and it's checked. So there's an Android application in about 100 lines of code. It's very simple obviously, but we've taken, you know, what's effectively the Java SDK, exposed it to C#, and enabled you to build an application where you can share your business logic, your application logic, files, storage, all that sort of stuff with other platforms. So let's skip back to the presentation. And yeah, we ended up with a working Android app in 100 lines of code. I talked a bit about sharing while we were coding. So the sharing can happen at multiple levels. The components on the Xamarin Component Store are often cross platform, so that you can use the SQLite component, for example, on Windows, Android and iOS. You can share code that doesn't require user interface dependencies. So like System.Net, System.IO, web services, the database, all that stuff you can just reuse verbatim. And I've just copied the source files across at the moment. That's the easiest way to do it. But we have PCL support coming, which will simplify that even more.
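For completeness, a sketch of the adapter walked through above. The built-in checked row layout is one plausible choice for the "text field with an optional tick"; the demo's actual layout may differ:

    using System.Collections.Generic;
    using Android.App;
    using Android.Views;
    using Android.Widget;

    // Translation layer between our IList<Task> and Android's ListView.
    public class TaskListAdapter : BaseAdapter<Task>
    {
        private readonly Activity context;
        private readonly IList<Task> tasks;

        public TaskListAdapter(Activity context, IList<Task> tasks)
        {
            this.context = context;
            this.tasks = tasks;
        }

        // The first thing the ListView asks: how many rows?
        public override int Count
        {
            get { return tasks.Count; }
        }

        public override Task this[int position]
        {
            get { return tasks[position]; }
        }

        public override long GetItemId(int position)
        {
            return tasks[position].ID;
        }

        // Then, for each visible row: what do I draw here?
        public override View GetView(int position, View convertView, ViewGroup parent)
        {
            // Built-in row layout: a text line with an optional tick.
            // Reuse the recycled convertView when the ListView hands one back.
            var view = convertView ?? context.LayoutInflater.Inflate(
                Android.Resource.Layout.SimpleListItemChecked, null);

            var text = view.FindViewById<CheckedTextView>(Android.Resource.Id.Text1);
            text.Text = tasks[position].Title;
            text.Checked = tasks[position].Done;
            return view;
        }
    }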
And you should also look at sharing the structure of your code so that it's easier for developers to maintain across different platforms. If your task screen is always called Task-something, you know, you don't find yourself mixing metaphors, and it will be easier for people to find their way around both solutions. And finally, sharing design. So if you find this code later on the net, you'll see that the application on iOS and on Android and on Windows Phone follows the same navigation metaphor where that makes sense. But it then also takes advantage of the individual platforms' specialities. So there's no back button on Android, because it's built into the OS, and similarly on Windows. But on iOS, the back button is part of the navigation controller. So using Xamarin gives you that flexibility and lets you use the native widgets to present a native user interface, which is an advantage over an HTML framework where you're sharing the same thing on every platform. We already touched on this. You can put a lot of stuff in the shared code library if you put your mind to it, because the user interface is really the only thing you need to do. Wiring up, property assignments and validation messages is all you really need to have in your platform-specific application code. Like I've already said, the web services, I/O, cloud access, storage should all be able to be written in such a way that you can just reuse it, share it, write unit tests against it in a platform-agnostic way. So to recap, we built an Android app in C# with .NET. You can use Xamarin Studio as I showed you on the Mac. On Windows, you can use Xamarin Studio or Visual Studio. Visual Studio is great because then you can share your code with Windows apps really easily. You can use ReSharper, you can use TFS, all the stuff you're used to within your Visual Studio environment. Apps will have the native look and feel because they're using native widgets. When Android includes new user interface components, like the navigation drawer they just announced, you can really quickly and easily incorporate that into your apps as well. They run natively; as I tried to explain earlier, our framework is compiled and runs natively on the operating system. It's not cross-compiled or sat on top of Java. You can share code, and we have the Xamarin Component Store which lets you build applications more quickly, as long as we've got some of that functionality that you need there for sale or for download for free. What's coming up next for Xamarin.Android? Probably the most exciting thing that people are waiting for is for our async await language support to catch up. The Task Parallel Library has been available for a long time, but the syntax changes around async await are currently available only in our beta platform. That stuff is going to be stable very soon and you'll be able to deploy applications with it. That will bring us back up to parity with the version of C# that Microsoft's making available on Windows Store, Windows RT apps. F# will be a first-class language for Xamarin, which means you'll be able to write F# apps that run on Android and iOS. Portable class library support will make it even easier to share code across these two platforms and, again, also with Windows Store and Windows Phone apps. We're just in the process of finishing up the addition of the new Google Play services stuff that was announced at Google I/O. So the new in-app purchasing, location services, Maps v2 stuff is actually already available.
That stuff brings the Android platform much more closely into line with, and in some cases exceeding, what's on iOS. So the in-app purchasing in particular is a massive improvement, and the support library additions as well, so drawer navigation, stuff like that. So the platform is always growing along with Google in terms of the new functionality that they provide, but also along with Microsoft and things like F# and the async stuff, along with the way that .NET is growing as well. That's an async screenshot, if you haven't seen it. So it's really easy to get started. Go to xamarin.com, getting started. A sample very similar to the one we walked through today is there. You can download the IDE and the tools from that one link at the bottom and run this sample for free. We've got documentation and recipes, we've got forums, and all of our samples are available on GitHub. And that's it for now. Have you got any questions? I've got two monkeys for the first two questions. Yep. Right. So the question is about different screen layouts when you're designing using the... So if you lock, for example, screen size and then change screen size and then you make some edits, like add something or make that a different icon because of the container. So if you make changes here, what happens is, under the covers, I don't know if it will have done it already, Android creates another folder, say layout-hdpi, I'm not sure that's exactly what it is. And it has another copy of your layout in there. So it effectively has two copies. And Android uses those dash qualifiers to decide, and yeah, it has like a hierarchy. So it'll choose the best match. Sometimes it's really obvious because the qualifier will be for a specific screen size or a specific attribute of the device. Other times, I guess particularly for some screen size ones, it'll just be like a scale. So all devices smaller than this will get layout A and devices bigger will get layout B. So all you can do at this stage is pick a qualifier and design for it, and then the device will pick one based on what it's configured for. So if I just drop those down, some of them, so pixel density, that's going to be an easy one. Has touchscreen, probably not going to... That's an old Android hangover, I guess. But various things that you can target; some of them are more specific than others, but you rely on the device to pick the one that is most appropriate for it. Yeah. So it's possible to do... It's possible to have... To write a Java library and bind it. So if you want to have a lot of Java code you've already got, you want to put it into a Xamarin.Android application, you can pass things in, you can use dependency injection to pass something in and then call back out. We don't currently support building a library in C#, having a Java wrapper and then, say, including that in a Java Android application. But that's something that people request on our UserVoice site. Not currently possible... Yeah, I know what you mean. I mean, technically there's probably ways of doing it, but it's not part of the product that you can build that library with the tool chain as it stands. Yeah, so that question was about SDK targeting. Yes, we do. So if you go to the properties and choose a target framework, and you can also set that stuff in the Android manifest here as well. But yeah, so the auto complete and the compiler will tell you if you're accessing something that's not available. Is that... Everyone? Okay, well, we're just two minutes early, but that's fine.
Thank you everyone for coming along. Can you two guys that asked questions come and grab a monkey? Thank you.
Learn how to build an Android app with C# using Xamarin.Android. We'll create a new app, learn how Android-specific idioms work in C# and use Xamarin's tools to build a native UI, access platform-specific features and see how to deploy to Google Play. Using Xamarin and C# will also let us share code with iOS and Windows apps.
10.5446/51403 (DOI)
I'm just waiting for my machine to boot up. I just finished another talk and my machine decided to have a bit of a rest in between as I walked over here. So I'm trying to get it started up again. We're going to talk about some interesting stuff today. I've given this talk a couple of times already. It's really designed around, well, I'm probably going to hurt some people's feelings in this talk. This is advice from the ASP.NET team based on things that we've seen people do in the wild. Given that ASP.NET is a fairly old framework now, it's over 10 years old, there's a fair bit of legacy. There are some security things that are interesting to talk about that have been fixed along the way, but some people may not know about them and are doing things incorrectly. There are some APIs that we wish that we'd never invented, or they served their purpose and now we don't want you to use them anymore. So I'm going to tell you about those. For every one of the ones I tell you not to do, I will give you an alternative of what you should do instead. As I said, I'm pretty much guaranteed to offend at least one person in this room with what I'm about to tell people not to do. If I'm really good, I'll freak someone out enough that they call back to work and ask them to pull down their website immediately because they're doing something that's completely insecure. We'll get there. Come on, come on, laptop. Who was in my last session with David? There is no coding in this session. This is all me talking about APIs and what not to use and what to do. There is a PowerPoint and I link off to some examples and things. There is no coding in this demo. I'm coded out today. Come on, come on, come on, come on. How many people here have been using ASP.NET for two years or more? Okay, three years or more. Keep your hand up. Four years or more? Five years or more? Six years or more? Seven years or more? Look at the stayers. Eight years or more? Now dropping off now, nine years or more? Where does that put us back to? That's about 2004? Ten years or more? Eleven years or more? It came out in 2002, by the way. This is where it stops unless you used the beta. Twelve years or more? Thirteen years or more? First beta? Who used ASP Classic? Okay, a lot of people. Okay, here we go. It's up. That filled in the time beautifully. Plug this in. Now, God willing, this will just work this time. I want that one. And just for good measure, I'm going to paste it out to the desktop. Oh, come on. Come on. There we go. I was tricking you. No, do not start in safe mode. Just start. There we go. All right. The last time I gave this talk, there were a lot of Scotts on stage. Scott Hack, the little-known Scott Hack. This time it's just me. All right. ASP.NET: don't do that, please do this instead. Really obligatory disclaimer, written by the security guy on our team, who we never question about anything he says about security. This is not intended to be like the complete guide where, if you follow everything in this guide, in this presentation, then that's it, you're golden, and we're signing off on your application. Ship it. You're never going to have any problems. This is not that, of course. This is stuff that we would like to bring to your attention based on previous guidance that we've given you, and perhaps we're changing our guidance. We only intended to call out the most common, incorrect, or undesirable uses that we encounter when we look at people's applications. People file bugs. They send us repros and we go, oh, you're doing that.
Please don't do that. This is a collection of those things. We've broken the talk into sections. The first one is around standards compliance. We're talking about HTML standards compliance, JavaScript, that type of thing, first. Control adapters. Who has ever used control adapters in Web Forms? There's a lot of Web Forms stuff in here, believe it or not. Yes, control adapters. Control adapters were originally designed to support mobile controls. We had this fantastic feature in ASP.NET 2 called mobile controls. You drag a text box, a mobile text box, onto the page and it would render different HTML or WML or XHTML depending on what the device was. It did that using our browser capability system and these things called control adapters. So the one control could adaptively, in a pluggable fashion, change what it renders at run time. We don't want to support it anymore, so please don't use it. It's an old technology. We deprecated the mobile controls themselves, but a lot of people are still using control adapters. We would prefer that you do this using CSS: render standards-compliant HTML, either using a control that you wrote or one of the ones in the box or just basic HTML. Then you use CSS, media queries, responsive design, all that good stuff to do the mobile-specific logic in your application. Who is doing the stuff on the right today? Responsive design, media queries? Great. Perfect. Who has ever used the stuff on the left to do mobile websites? Anyone? Okay, good. It served its purpose. Before we had CSS, or for devices that didn't support CSS, you had to do this. Now we just sort of assume that everyone has a smartphone, or at least a phone with a browser that supports some level of HTML and CSS, so don't use them anymore if you can avoid it. Okay, style properties on controls. This is code-behind: textbox.Style, edit item template, alternating item font, fore color, opacity level. There are hundreds and hundreds of these properties on the Web Forms controls, and I hate them. You're in your markup and you're doing grid view, colon, blah, blah, blah, and then you hit E and you get an IntelliSense list with 100 items in it that's just full of this garbage. Okay? You shouldn't be setting your CSS inline on your HTML anyway. We all know that. You should be attaching it via CSS. Now I would love to, in a future version of ASP.NET and Web Forms, literally delete all of these properties. Just get rid of the style collection class and get rid of the styles property and force people to use CSS to do it the right way, quote, unquote. So please, if you are using this today. Very common if you're doing things like data binding: you're handling the item data bound event, and then you go in and you navigate through the cells, look at the value, and if it's less than zero, then you say, oh, currentCell.Style.ForeColor = Color.Red. Okay? Don't do that. Do currentCell.CssClass = "negative". Then add a CSS style sheet with a class called negative, which has a semantic name, and set the fore color of the font that way. Much more maintainable. You don't have colors and all types of style crap embedded in your C#. We will be able to clean up this IntelliSense in the future for you as a result. Please don't use style properties. Page and control callbacks. They were introduced in .NET 2. This is the thing that lets the grid view do paging and sorting without having to do a full page request. This is not update panels, that was different. Anyone used page or control callbacks?
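As a sketch of that advice (before the talk moves on to callbacks): a row data bound handler that attaches a semantic class instead of setting colors in code. The handler name, column index and class name are illustrative:

    // In the stylesheet: .negative { color: red; }
    protected void Grid_RowDataBound(object sender, GridViewRowEventArgs e)
    {
        if (e.Row.RowType != DataControlRowType.DataRow)
            return;

        decimal value;
        if (decimal.TryParse(e.Row.Cells[1].Text, out value) && value < 0)
        {
            // Semantic class name; the color lives in the CSS, not in C#.
            e.Row.Cells[1].CssClass = "negative";
        }
    }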
Okay. Also known as page methods. You could have a static method on your page, and then you could invoke it using Ajax. Okay? Kind of nice. Ish. Just don't use them. We have better ways of doing this now. These cause some issues with some of the newer stuff. Friendly URLs, the stuff we added in Web Forms this year that gives you the extensionless URLs for Web Forms, doesn't work well with this. Routing does not work well with this. If you want the modern URLs, the ability to control your URLs using routes, then have them separate to your pages and your controls and whatnot. It doesn't work well with page callbacks and controls with callbacks. Just use anything else. Anything but that. Okay? Update panel is fine. Update panel is an amazing piece of technology. It really is. I've seen the patent. Okay? It's patented. It's really cool. Just don't abuse it. Okay? Know how it works. Understand what's going on, and then use it for the portion of the page that you want to use it for, and just be mindful of it, and test it, you know, performance test it and whatnot. Just use Ajax, MVC action methods, Web API, SignalR, whatever it is you want to do to get that Ajax-y type non-full-page-refresh back, but try and avoid page and control callbacks. If you're using the grid view or a control that itself uses control callbacks, just turn that feature off or don't enable it in the first place. None of the controls will use it by default. You have to turn it on in order for it to work. So just don't turn it on. Okay? Capability detection. So we have this feature in ASP.NET called BrowserCaps, or browser capabilities, which is this massive XML database that's installed on the server that says, hey, this browser with this user agent string is IE whatever or Firefox whatever, and it supports ActiveX, cookies, this type of image format; it has this wide a screen, blah, blah, blah, blah, blah. Okay? And there are third-party vendors who sell expanded versions of these databases for mobile devices and things. And people have used this for a long time; the mobile controls that I talked about before, they use this. But we generally now know that doing browser detection or capability detection from a static reference is generally frowned upon. Okay? We don't like doing that anymore. We don't do user agent sniffing. We don't test for one thing and then assume that something else will be there because that thing was there. We know that's bad, right? So we should be doing feature detection. So we should be lighting up our features in the client where we can test for those features in real time using JavaScript or clever CSS tricks. Oh, I know that this browser supports PNG images because I tried to write some code that should result in this element appearing and it didn't. Or, now I know it didn't support PNG images, but you get the idea. So we have tools like Modernizr, a great library from a few guys: a guy at Google, Paul Irish, and a couple of other guys. We ship it in our templates. It's a client-side library that helps you determine what are the capabilities of the current browser, but it does it using feature detection, not via some big static list of features that it understands. Okay? So please use feature detection and not capability detection. Okay, so that's everything about standards compliance. I'll stop ranting about that now. And we'll talk about security. This is where it gets a bit scary. Okay, so request validation. Who knows what request validation is in ASP.NET?
Okay, it's that really annoying thing that gets in the way when you try and post back anything that looks like HTML? Yes. You turn it off, which is what most people end up doing, which is bad because it means that we probably didn't design the feature very well. So the idea behind request validation was that, hey, let's not have the developer have to worry about cleansing all the input coming in from the browser on the odd chance that they may echo that input back out to the browser, or they may inject that input into a SQL string. Okay? We'll detect any malicious type of content. Oh, there's an angle bracket. Oh, there's a question mark after or whatever. Shut down the request before your handler even gets assigned. So it's a module that runs really, really early in the HTTP pipeline in ASP.NET. It inspects the request. So it looks at the form body, it looks at the query string, and it looks at the request path. And if it sees anything it doesn't like, it just kills the request on the spot. You get that yellow screen of death: hey, a potentially dangerous token was found, and blah, blah, blah, blah, and you go, ah, damn it, and you turn it off. You never use it again. Now, we did add features in version 4.0 that let you do it more granularly. So in MVC, for instance, you could say, I just want this text box to not go through request validation, but let the rest of the page go through request validation. That was nice. You could turn it on for the whole page, but just not have it on for this control. And I'm sorry to say we actually did the work in ASP.NET 4.5 to support that same thing in Web Forms. And then we decided that we didn't want people to use it anymore, after we did all the work. So we do not want you to depend on request validation. And there's a reason. It's because request validation is a game of whack-a-mole. We find, or we get reported, a vulnerability where browser foo version X doesn't like it if you send in a request with a form body with this seven-character string escaped with Unicode 7 or something crazy. That gets through request validation. So now we have to patch request validation and send it out to the world. That's a really expensive effort. And we just can't win, because this stuff changes all the time. There are new browsers every week, new versions that introduce new vulnerabilities, and we can't keep request validation up to date. And anyway, it's a bad idea. It actually encourages bad practice. We're giving you a crutch that you shouldn't be relying on. So what should you do instead? Who knows what you should be doing instead? Come on, I have nothing to give you, but I'll be impressed if someone can give me the answer. What's the answer? No one knows. So encoding is one half of it. Well done, sir. You need to encode your output, whether it's going into SQL via a parameter or going into the HTML via HTML encoding, and you need to validate your input. If you're expecting a URL as the input from some text box that the user has typed into, validate it as a URL. Don't use Regex, for God's sake. Don't use Regex. We have types in .NET. One of the nice things about .NET is this big library of useful features. We have a type, System.Uri, which you can pass the string into. And it will tell you whether it's valid HTTP, sorry, a valid URL or not. And it is written to the spec. So if it says it's a URL, it's a URL. If it says it isn't, it isn't. You don't have to do that.
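A minimal sketch of validating with types instead of a regex (variable names are illustrative):

    // Validate input as the type you actually expect, instead of
    // leaning on request validation or a hand-rolled regex.
    Uri url;
    bool isValidUrl = Uri.TryCreate(input, UriKind.Absolute, out url)
        && (url.Scheme == Uri.UriSchemeHttp || url.Scheme == Uri.UriSchemeHttps);

    int quantity;
    bool isNumber = int.TryParse(input, out quantity);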
So validate the input on the way in. If it should be a number, make sure it's a number. If it should be a string without certain characters, write a parser. Maybe you could use Regex in that case. If it should be an email, what's the best way to validate email? Send an email to the address and wait for a reply. I am not kidding. It is the best way, and really the only way, you will ever truly guarantee that the email address that someone has written in is a valid email. It may look like a valid email, but it's not valid unless they can get an email on it. And if you've read the email spec, including supporting international characters in your email, which all modern clients do, modern mail servers do, I can type an email address that contains a domain which contains Unicode extended character pairs. Chinese, Kanji, and hieroglyphics and all types of Wingdings, whatever I want. Emoji characters. I can have an email totally made of emoji. Does your Regex support emoji email? I pretty much guarantee not. So all you should really do is go: it's an email, I know it has to have an at sign in there somewhere, and it can't be the first character and it can't be the last character. So check for that. That's fine. Although technically you could escape the at sign. So anyway, you could do that, and then once you get the string, send an email to it. And then just say to them, hey, we got your email address. That's great. Sometime in the future you'll get an email. Please reply to it and then we'll activate your account, whatever it is that you're doing with that email. And then on the way out, always encode your data. So if you're in Razor, good news. It does it for you by default. You just do at foo, where foo is some variable, a string or an int or some user-provided data, something you got from a database that your business users might have had access to. You need to protect against that as well. Then just do at foo. It'll automatically HTML encode it for you. There is no way they can get bad data into the page that sort of results in a cross-site scripting vulnerability in your page. If you're using ASPX, in version 4 of .NET we introduced angle bracket percent colon, or bumblebee as we call it. So angle bracket percent equals should never, ever, ever, ever, ever be used anymore. Just don't use it. You can't do it in Razor. There's no way to say don't encode it. The way you say don't encode it is to give it a variable of type IHtmlString. Then it will go, oh, it's HTML already. I don't need to encode it. You have explicitly stated that this is not a string, it's an HTML string. And the colon syntax supports the same thing. Just don't use equals anywhere. Who currently knows that they have angle bracket percent equals in their site? It's the first thing you're going to do when you get back. Seriously. And if you're using data binding in Web Forms and you're using angle bracket percent pound, or hash, and then like Bind or Eval: in .NET 4.5, we added hash colon, which does the same thing. It performs a data binding expression, gets the value, and then encodes it automatically for you. If you're not using .NET 4.5, I'm sorry, you'll have to manually call, you know, HttpServerUtility.HtmlEncode and then pass in the result of the binding expression. If you're using two-way data binding, the Bind, angle bracket percent hash Bind, who's using that? Anyone using that in Web Forms? Okay. There is no way to encode that unless you're in .NET 4.5.
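Pulling the encoding rules from the last couple of paragraphs together, a sketch using the System.Web helpers (JavaScriptStringEncode needs .NET 4 or later):

    using System.Web;

    // Each output context needs its own encoder; HTML encoding does
    // nothing useful inside a script block or a URL.
    string intoHtml      = HttpUtility.HtmlEncode(userInput);
    string intoAttribute = HttpUtility.HtmlAttributeEncode(userInput);
    string intoScript    = HttpUtility.JavaScriptStringEncode(userInput);
    string intoUrl       = HttpUtility.UrlEncode(userInput);

    // In Razor, @userInput HTML-encodes automatically; hand Razor an
    // IHtmlString (for example via new HtmlString(...)) only when the
    // value is already known-safe HTML.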
So upgrade to .NET 4.5 and change that to hash-colon Bind, then you're safe. Okay. Excellent. Ah, yes, because that's required for .NET 4.5. It is. Yes. You have to actually encode the values that you're binding to. So the thing you're binding to has to already have been encoded. Okay? That's the way you'd have to do it. And don't forget about JavaScript. If you're emitting stuff from server code into a script block or into a CSS attribute, into style-equals, you have to do different encoding. They're different languages. You can't HTML encode in there. And we have methods for those: JavaScriptStringEncode, CSS encode, URL encode. If you're emitting a string into an href attribute, that's not HTML anymore. It's a URL. Again, you have to correctly encode the value to ensure that people aren't trying to pass values in through a URL. Which people do. It's how you get cross-site scripting in your site. It's a really, really nasty bug to have. All right. Cookieless FormsAuth and Session. Another brilliant feature of ASP.NET since 1.0. Who knows what cookieless FormsAuth and Session is? Okay. Who's using it? Great — no one. Who has used it in the past? Anyone? Okay. Good. So in the end, basically, it was designed in the early days when a lot of browsers didn't support cookies. We cared about browsers that didn't support cookies. And we wanted to use FormsAuth and Session. But you know what? Just don't use it. It's insecure. Okay? You should never be passing around this stuff in the URL, which is what cookieless does. It depends on the URL. It's just a really, really bad idea. So enable require cookies for these features. There's actually a flag that says require cookies. Okay? So you can turn that on to ensure that you'll never accidentally support a browser that turns up with cookies turned off. And then you'll start passing around tokens that are a potential security vulnerability. And consider using only SSL cookies for sites serving sensitive information. So that means if you're writing a cookie or using FormsAuth, then if that cookie contains sensitive info, like an authentication ticket, you should only be sending it over SSL. And the cookie should be marked as only being able to be sent over SSL. There's actually a secure flag on a cookie, and the browser will not send it unless it's an HTTPS connection. And that's all configurable in ASP.NET. Whether you create the cookie yourself in code-behind — there's a flag on the cookie, Secure, set it to true — or if you're using FormsAuth or Session, you can set that flag as well in the configuration. EnableViewStateMac. Oh boy, this one's a good one. Who knows what this is? EnableViewStateMac. Everyone knows what ViewState is, right? The thing we love to hate? So ViewStateMac is the thing that ensures that the ViewState that was posted when you do a form post is valid. It ensures that the ViewState came from the server originally. Because if you think about what's in ViewState — what's in ViewState? It's state to do with the controls that were rendered, or it's stuff that you put in there manually. And whatever is in that state affects what the controls do. It may rehydrate a bunch of properties that you would set after page load, or it may trigger an event and then call into your code.
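For the cookie flags mentioned here, a minimal sketch in code-behind might look like this (the config equivalents are the requireSSL attributes on forms auth and httpCookies; the helper name is made up):

```csharp
using System.Web;

public static class CookieHelper
{
    public static HttpCookie CreateAuthCookie(string name, string ticket)
    {
        return new HttpCookie(name, ticket)
        {
            Secure = true,   // browser only sends it over HTTPS
            HttpOnly = true  // not readable from client-side script
        };
    }
}
```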
So you really want to make sure that when someone does a post to your website, to an ASPX page, that the ViewState payload that comes up — which is a base64 bunch of characters — we decode it and then basically we deserialize it into binary types and then we run code. You should already be worried at this point. You need to make sure that that's a valid piece of ViewState that only came from your server. You need to ensure that it hasn't been tampered with. Someone hasn't tried to manipulate the ViewState in such a way that they can make your code do something you didn't want it to do. So do not ever, ever, ever turn this off. We should never have let you turn this off. Has anyone here ever turned this off? This is a setting in the page or in web.config: EnableViewStateMac equals false. Has anyone ever done it? Has anyone ever admitted to doing it now that I've said that? But "I'm not using ViewState" is not a valid excuse. Unfortunately, we called the property EnableViewStateMac, but then we used it to enable other things that don't do with ViewState, like ControlState and event validation and all the other things that Web Forms sticks in hidden variables and then uses to sort of rebuild the page when you post back. It's incredibly important that you never, ever turn this off, and in a future version of .NET, we will remove the support for this. If you set this to false, we'll just blow your application up. It is incredibly dangerous. There are actually unpublished vulnerabilities about this thing. Yeah, shame on us for ever allowing it. So here is a public vulnerability. I get the hyperlink. Where's the hyperlink? Ah, give me the hyperlink. You know what? I'll format it as a hyperlink. Seriously, PowerPoint, that was a hyperlink last time I did this. There is actually a public known vulnerability. If you turn this off — which it is not by default, of course, it is on by default, meaning it's secure — but if you turn it off in your app, there are publicly reported vulnerabilities that you could be susceptible to that will enable cross-site scripting in your app. It means I can make a malicious request to a page where this is turned off. I can manipulate the ViewState because it's not encrypted and it's not verified — it's not signed with a MAC, an HMAC. And then I can tell your site to do something that you didn't intend it to do. Get it to render stuff into a page for another user that makes it run my JavaScript, for instance. Really, really bad stuff. Medium trust. Who uses medium trust in ASP.NET? Really, everyone's running full trust? Really? Who doesn't know what medium trust is? So, .NET supports a partial trust system. The idea of having your application code run inside an app domain that is restricted. We actually lock down what APIs are available for you to call. And we protect, in the same process — so w3wp.exe — two applications running in the same process but in different app domains that are set to medium trust. In theory, it prevents those two applications from accessing each other's state or calling into each other, doing bad things, calling into each other's memory. In theory. It turns out that didn't work out so well. We publicly announced last year we changed our guidance. Please do not use medium trust. It is not a security boundary. Or any other trust level. You should just be running full trust, because the trust level system in ASP.NET is no longer a security boundary. It was for 10 years, and then we found out it wasn't. And there's no way to fix it.
And so we just tell you not to use it. So how should you do isolation then? If you have a situation where you have two applications — if you're a web hosting company, you have two applications running on the same server in IIS and you need to protect one application from the other. Okay? That's pretty important, right? So, multi-tenanting. If I have Fowler's app and my app running on the same server and we have nothing to do with each other, I need to ensure that you can't, like, scan the world as soon as you're deployed to a shared host and start reading secrets from other people's applications, like their database connection strings and stuff, right? So what should you do? How do you protect against this? Application pools. Process-level isolation. It's the only guaranteed security boundary in Windows, essentially. Windows is built around process isolation, okay? And so is .NET, it turns out. So place all untrusted applications into their own app pools, because the application pool is the unit that then results in the actual process running. So if you go into Task Manager, you'll see w3wp.exe. That's the process that maps to an application pool. Okay? The app runs inside that. I don't think John McCoy would agree. Okay. Okay. We should get you to introduce him to my guy. Then run each application pool under its own unique identity. Very important. Okay? So by default from IIS 7 and above, automatically every application pool runs in its own identity. We have those strange app pool identities which confused everybody when we released IIS 7. It's like, oh, it used to be Network Service, which just worked, and now it says app pool thing and, like, nothing works anymore, right? So everyone just changed it back to Network Service. Which is fine if everything on the server runs at the same trust level. But if you're trying to do multi-tenanting, every app pool has to be running as a separate user account, right? It's the only way you're going to restrict access to each other. And since Windows Vista and above, Windows has that really cool process isolation. One process cannot call into another process at all unless they have a pre-agreed token, blah, blah, blah — stuff I don't understand. And so follow the guidance. There's a great link there, Knowledge Base Article 2698981, where we basically admit that we're changing our guidance and don't use medium trust. And we show you how to set this up. If you're using IIS 7 and above, we have this special application pool thing that basically creates a user account on the fly as the app pool starts. And you can use that user account to set ACLs and give your database access and all that type of stuff as well. If you're running in a domain, then generally you'll create domain accounts anyway. And you'll run your app pools using a domain account. If you're a hosting company, please don't use medium trust. If you're using a hosting company and they still run in medium trust, move to a different hosting company — no, talk to them. Get them to read this Knowledge Base article and then get them to move off medium trust, or send them to me and we'll talk to them. Okay. App settings. Great feature, .NET 2.0. So there are a whole bunch of settings that you can set in app settings. This app settings is the string-based one, not the strongly typed one, right? Magic string key, magic string value, magic happens, right? Turns out it's not just there for you.
We have a whole bunch of magic strings that we know about that, if you put them in app settings, change ASP.NET. And you'll be going, well, why isn't it just first-level config? Why did you put it in app settings? Why don't you just add a new element in the configuration schema to let me do this? Well, a lot of the time when we do a security fix and we send out a patch, we'll add a back door that lets you turn the patch off. We have to, because a lot of these security fixes break applications — compatibility. And we have to give you a way to roll out the patch but have it turned off, so that while you're rolling out the patch, say in a web farm, where you can't roll out the patch at once to 20 servers — you can only do it one at a time — you have to be able, in your application, to turn the patch off before it even exists. So we can't add schema, because that's strongly typed. It has to be a magic string. So we give you the switch to turn it off. You set that in your app, you redeploy your app, okay? Then you go through and you deploy the fix to every server. And once that's done, you just redeploy the app with the switch removed. And now you're secure, okay? That's the only time you should ever use these app settings to turn off these fixes. The only time you should ever use it. And there's a link. Hopefully this one works. They are documented. We did the work of documenting every single app setting last year, because there were a lot, and we added more in .NET 4.5. Come on, network, it's a really good list. Do, do, do, do. Really? There we go. Cool. So this documents all the magic keys that ASP.NET itself supports to turn things on and off. Now some of these are marked as, you know, important: this thing should only be modified by advanced developers. But of course, everybody goes, well, I'm advanced. I found the documentation. I must be advanced, right? I even knew it existed. Some of them are marked a little bit more forcefully, like this: setting this attribute to true can pose a security risk. Allow relaxed HTTP username. So here's an example: someone reported a vulnerability. We issued a patch. That patch would break certain applications that were essentially taking a dependency on this vulnerability. And so we needed to give you a way to turn that patch off — while you still install it, obviously. And so we say, please do not set this. Okay? It could pose a security risk. And they're all documented. And I will admit there are quite a lot of them. Okay? ASP.NET is a mature product — 13 years, as we just heard before. Okay? So be careful. UrlPathEncode. Who can tell me what this method does? It's a trick question. No one can tell me what this method does. Because the method shouldn't exist. But people use it because they think it encodes URL paths. I don't know where they got that idea. Oh, wait. I see what we did there. Yeah. Don't do it, please. This method was intended to solve one problem. And it was to solve a problem in Netscape 2 handling UNC links. We should have called it MakeLinksSafeForNetscape2HrefAttribute. But we didn't. We called it UrlPathEncode. And so people thought that they could use it to encode query string values or segments of a URL and that it would be safe and protect you from cross-site scripting. It doesn't. So do not use it. Do what we talked about before. Sanitize your inputs instead. Make sure that anything that's submitted as a URL is an actual URL by using the Uri class.
It's what it's for. And then use UrlEncode — UrlEncode is the right one to use. So if you need to take a value and put it in the query string of a link — you're generating a hyperlink, and you need to put a value into a query string — use UrlEncode, not UrlPathEncode. That's safe to put into a query string. Do not use UrlPathEncode. We will probably delete this method in a future release of .NET. Or make it throw, or something equally heinous. Okay, EnableViewStateMac. We really, really meant it. Do not turn this off. If you're a consultant and you deal with customers who use ASP.NET, check this in every single application that you ever lay your eyes on. Ensure that they are not setting this to false. You know what you should do? We should search GitHub and see if there are any web.config files that have this set to off and then send them a polite email. You can't message people on GitHub, can you? Send a pull request that fixes it. That's a great idea. We really mean it. Or Windows Update. Well, Windows Update that changes people's code — they'd love that. Wait, that config file wasn't like that before. Patch Tuesday and we... That would be really cool. We really mean it. A crappy clip art on either side. We really mean it. All right, do not turn that off. Okay, reliability and performance. So what did we do? We did standards compliance and we've done security. So that's all the security stuff, okay? Have I scared anyone enough yet that they're going to go back and change their code when they get back to their workplace? Come on, it's got to be worth my time. I see some nods. No one's putting their hands up, but people are nodding. Or moving their eyes up and down, or looking away, like this, for the other people who have done it. So good. I got through to some people. Okay, reliability and performance. First up, these two events: PreSendRequestHeaders and PreSendRequestContent. Anyone using these? They're pretty rare. Okay, someone put their hand up. Okay. Try to avoid them — registering for these events from within a managed IHttpModule. So if you're using them from a native module, that's okay — an IIS native module. Okay? If you're using them from a .NET — ASP.NET — IHttpModule, they have issues. Okay? So use the native ones instead. I don't know what the issues are. My runtime guy — he didn't actually give me notes. That's a shame. But don't use them. They cause issues with asynchronous requests, doing overlapping things and, like, sending headers at the wrong time in the pipeline — after headers have already been sent. It's possible to do that using the managed API. The native API just solves all that for you. So don't use them. Even if you're not using them directly, you may be using a component that you bought or downloaded that is. So has anyone here ever used a large file upload component for ASP.NET? Anyone? No one? A couple of people's hands, okay. A lot of those used these events in the past, okay? Because they want to do stuff just before they send a response back or as the request is coming in. So, okay, async page events. So this is in Web Forms, to do with the task-based asynchrony that we added in .NET 4.5, which I spoke about here last year, I think, or the year before. Try to avoid writing async void methods. So you can do protected async void Page_Load and then write await code inside your page load. And that does work, okay? Works for the simplest of scenarios.
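The right call looks like this — a small hypothetical sketch of putting a user value into a query string with UrlEncode:

```csharp
using System.Web;

public static class LinkBuilder
{
    // UrlEncode is safe for query-string values; UrlPathEncode is not.
    public static string BuildSearchLink(string userQuery)
    {
        return "/search?q=" + HttpUtility.UrlEncode(userQuery);
    }
}
```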
But what we want you to do really — and same for button click handlers; you can do, like, a protected async void MyButton_Click with some async code, and it kind of works, ish; in the really simple demo cases it works. As soon as you do anything more complicated, you can introduce race conditions in your code. So what you should be using is the first-class API for telling the page that there is going to be async work. And that is Page.RegisterAsyncTask, which has been around since .NET 2.0, okay? It's been there forever, but we updated it in 4.5 to support Task-returning delegates — asynchronous lambdas, okay? You should be using that. And what that does is it queues up the async work when you call it from Page_Init, Page_Load, a button click handler or whatever it is. And then once you get to PreRenderComplete, it runs the async work in a coordinated fashion. And because the delegate you register with that method must return a Task, we have an object that will tell us when that async work is finished. Async void — we have no way of knowing when that async work is truly finished. We can kind of track it using the synchronization context, because it raises an event when async work starts and then when it finishes. But if async work kicks off more async work, that kicks off more async work, then that can just all fall over, okay? But if you use RegisterAsyncTask, it just works. And do make sure that you set httpRuntime targetFramework equal to 4.5 if you're doing any type of Task async in Web Forms or in MVC or Web API or SignalR, okay? If you're running on 4.5, make sure you set targetFramework equals 4.5. That flips in the new synchronization context that we added in .NET 4.5. It's opt-in. We don't do it by default, because it changes the behavior of async work, okay? So you need to turn that on if you're doing anything with tasks. Now the good news: if you do File, New Project in Visual Studio 2012, it's turned on for you by default, okay? Fire-and-forget work: try and avoid having code in ASP.NET where you handle a request — so in an MVC action method or in a Web Forms page load or an Ajax handler or something — try and avoid kicking off fire-and-forget work. ThreadPool.QueueUserWorkItem, starting a timer that calls a delegate every so many seconds from within ASP.NET. The reason is, at any point in time we may decide just to completely destroy the app domain while that async work is running. We generally won't do it while a request is running, but if you fire off fire-and-forget work, by definition the request will be over very shortly afterwards and that work will keep going, right? Because it's fire and forget. You don't care about it. So if you do that, we can just tear down the app domain and you'll get all types of strange exceptions in your background work, and we may even corrupt your state. And that can lead to really bad things. Say you were writing to a database, or a text file, in such a way — and now we've corrupted your text file. Okay, because we just literally crashed your thread. If you want to do background work in ASP.NET — first of all, don't. Move it to a different process. Write a Windows service. Or if you're in Azure, use a worker role: queue it and then have some other process pick it up, where you manage lifetime yourself. If you absolutely must do it inside ASP.NET, you can check out WebBackgrounder.
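A minimal sketch of the RegisterAsyncTask pattern described here (it assumes the page directive has Async="true"; the endpoint and the ProductsLabel control are hypothetical):

```csharp
using System;
using System.Net.Http;
using System.Web.UI;

public partial class ProductsPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Queue the async work as a Task-returning delegate; it runs in a
        // coordinated fashion at PreRenderComplete, not as async void.
        RegisterAsyncTask(new PageAsyncTask(async () =>
        {
            using (var http = new HttpClient())
            {
                string data = await http.GetStringAsync("http://example.com/api/products");
                ProductsLabel.Text = Server.HtmlEncode(data);
            }
        }));
    }
}
```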
It's a NuGet package that Phil Haack wrote that lets you schedule background work in ASP.NET — work that happens on a background thread inside the ASP.NET app domain — but it listens for the correct events. So when the app domain gets torn down, we give it a chance to finish the work gracefully before we destroy the world underneath it. So look at WebBackgrounder if you need to do that type of work, rather than just starting timers in app start or doing ThreadPool.QueueUserWorkItem. Okay, use WebBackgrounder. Okay, the request entity body. Try to avoid reading Request.Form or InputStream before the handler. This is pretty advanced stuff. So if you're writing a handler and you want to be able to get the actual network stream that represents the request coming in from the client, because you want to do large file upload — you can process the stream as it comes in, to do whatever it is, log it out or something like that. You shouldn't really do that before handler execute, okay? You don't want to do that. Or Request.Form, by the way. You really want to try and defer it to when the handler is executing. Handler execute is the stage where your action method in MVC runs or your page in Web Forms runs — the page lifecycle will fire, okay? That's the handler. The page or the MVC controller is the handler, and handler execute will execute those things. If you read Request.Form or InputStream before that — either from a module; that's generally where you'll do it from, from a module — then you can cause issues. What you want to do instead is use these APIs here: GetBufferlessInputStream and GetBufferedInputStream. So GetBufferlessInputStream was added in 4.0 and GetBufferedInputStream was added in 4.5. And they do what they say. The bufferless input stream gives you the raw stream from the request, which you can call before handler execute. But be warned: if you call this API, you're telling ASP.NET, I am taking over this request completely. I don't care what was going to run after me. I am in charge of this request from now on, and I'm going to read from the stream manually. Which means things like Request.Form won't work, because they usually get populated by ASP.NET by reading the stream. But if you start reading the stream — bufferless input stream — then you read the stream. You read the bytes into your own variable, which means no one else can read them. It's a stream, right? Once you read them, you can't look at them again. It's not a buffered stream by default. It's literally a network stream. So if you call this method, you can't call Request.Form. You can't call Request.Files, because that won't work. If you call GetBufferedInputStream, you can. GetBufferedInputStream will give you the bytes as they come up, but it will also buffer them away into memory or spool them to disk, so that if later on in the request lifecycle you do call Request.Form, it will still work, because we took a copy of it before we gave it to you. Okay? So — low-level APIs, but we've seen issues reported where people had called these things and then they caused server crashes and those types of things, because they're doing things in the wrong order, or they're doing things after certain things have happened, or things like that. Anyone actually using these? Anyone actually done input stream processing in ASP.NET? Okay, so that all fell on deaf ears. Never mind. If you ever need to do it, check out those. Okay, Response.Redirect — and this is more of a 'be aware' rather than a 'don't do this or do something else'.
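As a sketch, here is a module using GetBufferedInputStream so the body can be read early without breaking Request.Form later; the module name and the logging it implies are illustrative:

```csharp
using System.IO;
using System.Web;

public class RequestLoggingModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            HttpRequest request = app.Context.Request;

            // Buffered: ASP.NET keeps a copy, so Request.Form still works
            // when the handler runs later in the pipeline.
            var reader = new StreamReader(request.GetBufferedInputStream());
            string body = reader.ReadToEnd();
            // ... log the body somewhere ...
        };
    }

    public void Dispose() { }
}
```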
If you call the overload of Response.Redirect which just takes a string, that cancels the request. So when you call Response.Redirect(string), the next line of code after that doesn't run, because we call Response.End, which throws a ThreadAbortException synchronously inside your code and literally tears your function call — your stack — in half. Okay? You were calling this function, you were at this point in the call, you called Response.Redirect, and then we just canceled the thread by calling thread abort so we could return, okay? Which may or may not cause really odd things to happen in your application, depending on what you were doing. Okay? So your function had all types of state in memory and then we just kill it on the fly. For async handlers, Response.End does not abort — code execution continues. So the behavior differs depending on whether you're doing async programming or not. In async programming, we change the behavior. When you call Response.End in async programming, it doesn't actually end. We let the rest of the request run through and then we let it finish. So basically, don't call Response.End from async handlers. To end an async response, you have to return the task that you returned originally from the async method, like we showed just before. That's when the request will end — only when that task finishes. Or, if you're using the old pattern, when you call the callback that was passed into Begin. So Response.End doesn't do anything in async. If you want to redirect the response, you should be using the appropriate method given to you by the framework you're using. Now if you're using Web Forms, Response.Redirect is fine, but you just need to be aware of what's going on. There is an overload of Response.Redirect that takes a boolean, with which you can say redirect, but don't thread abort — redirect after you've finished executing the page. In MVC, you'd return a RedirectResult. Do not call Response.Redirect from inside your action method. That is absolutely the wrong thing to do. You need to return a RedirectResult from your action method. And then the MVC pipeline will see that at the appropriate time in the lifetime of the request and then send it back. Now remember, this section is about performance and reliability. We're not saying that this is going to cause issues immediately, but it does make your app less reliable. It may cause issues under load. Some of these things introduce certain race conditions or possible memory corruption or stack corruption problems that you may not see until you get lots and lots of load, or you may see on one in a million requests. But they can be the hardest bugs to track down. I'm sure you all have experiences of looking in the event log and seeing an obscure error from ASP.NET. You have no idea why it was there. You never saw it again. Or you see it once every week and you don't know why it's there. A lot of the time it's these types of things. You may not be doing it — you might be using some code, a component that you bought or something you downloaded from the Internet, that's doing it and you don't realize it. EnableViewState and ViewStateMode. Try to avoid EnableViewState. EnableViewState has always been there. It's how you turn off ViewState. It's on every control in the hierarchy in Web Forms, but when you turn it off, there's no way to turn it on again underneath that control. If you turn it off at the page, it's off.
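The two safe patterns, sketched side by side (names are illustrative):

```csharp
using System.Web;
using System.Web.Mvc;

public class AccountController : Controller
{
    public ActionResult LegacyPage()
    {
        // MVC: return a result; never call Response.Redirect in an action.
        return Redirect("/account/new");
    }
}

public static class WebFormsRedirects
{
    public static void SafeRedirect(HttpResponse response)
    {
        // Web Forms: endResponse: false avoids the ThreadAbortException;
        // the redirect is sent after the page finishes executing.
        response.Redirect("/account/new", endResponse: false);
    }
}
```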
You can't turn it on for one control. ViewStateMode, which we introduced in .NET 4, lets you do granular ViewState control. You can set ViewState off at the page level — which we thoroughly recommend, by the way. Just turn ViewState off by default. Set ViewStateMode equals Disabled at the page directive level: <%@ Page ViewStateMode="Disabled" %>. That will set ViewState off for the page. Then on the controls that your testing has determined need ViewState to work, turn it back on just for that control. That will save you an awful lot of ViewState that you otherwise would be taking up, which is one of the biggest bugbears that people have with Web Forms. It's really easy to fix in .NET 4. Just turn it off and then turn it on for the controls that need it. You'll see your ViewState shrink dramatically. Is anyone doing this already? People know about this already? No? One person? Two people? Good. Okay. SqlMembershipProvider. This is the in-box provider that we shipped forever for doing membership support in .NET, using SQL Server. We replaced this with an out-of-band component called the universal providers — System.Web.Providers. It's a NuGet package. It's used by default in the templates since, I think, Visual Studio 2010 Service Pack 1. It's in all the templates in Visual Studio 2012. It works with all databases that Entity Framework supports, because it uses EF. It doesn't talk to SQL directly. It uses EF. It'll work with SQL, Azure SQL, SQL Compact. If you have the Oracle Entity Framework provider or the MySQL Entity Framework provider, it'll work with those as well. Just be mindful of that if you're still using SqlMembershipProvider. The other thing is that we made some improvements. The universal providers are better. If you're still using SQL Server, these ones are better. They don't use stored procs. They're a bit more flexible. They're faster. They're better written. We can update them out of band — they're not in the framework, they're on NuGet — so they're easier for us to improve as we go forward. If you're deploying to Azure, you really need to use these ones. Long running requests. That is, requests that run longer than 110 seconds. Really, any request that runs longer than a couple of seconds is by definition a long running request. Requests generally should be over very, very quickly. That's what we want our things to do — be as quick as possible. That's how HTTP was designed to work. Some interesting things happen when you let your request run over 110 seconds. By default, that's the request timeout setting. Depending on what framework you're using — whether you're running a raw handler or Web Forms or MVC — we may or may not just destroy the request and give you an exception, a request timeout exception. Things like the session object: who's using session state in ASP.NET? Begrudgingly. Session state in ASP.NET is not my favorite feature. I own ASP.NET feature-wise. I don't like it. A long running request — session state takes a lock on the session object. When two requests from the same user come in at the same time, perhaps because your page makes two Ajax requests, or while the page is still loading they hit a button which makes an Ajax request, we block the second request while the first one is still active, because we've locked the session object for that user. A lot of people have run into this when trying to do long polling in ASP.NET in the past. After 110 seconds, we release the lock. Maybe that's a bad time.
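The same granular control is available from code-behind; a sketch, with a hypothetical OrdersGrid control standing in for "the one control testing showed needs it":

```csharp
using System;
using System.Web.UI;

public partial class OrdersPage : Page
{
    protected void Page_Init(object sender, EventArgs e)
    {
        // Off by default for the whole page (markup equivalent:
        // <%@ Page ViewStateMode="Disabled" %>)...
        ViewStateMode = ViewStateMode.Disabled;

        // ...back on only where testing showed it is actually needed.
        OrdersGrid.ViewStateMode = ViewStateMode.Enabled;
    }
}
```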
Maybe you were in the middle of doing some session work, and now suddenly we release the lock and all hell breaks loose. Your session stuff that you were saving doesn't get saved. Or the read that was going on suddenly happens when it should have been blocked. So don't do it. Don't try and have long running requests — greater than 110 seconds — that use session state. Just try not to use session state if you can avoid it. Also, don't perform blocking IO operations. I think everyone's aware of what a blocking IO operation is now, after two years of Node being on the scene. We've talked about async a lot in the last two years. We now have async APIs for all of the types of IO that you would want to do in .NET, whether it's file, network, web service, socket, or database as of EF6. ADO.NET already supported it, but EF6 supports it at the ORM level. So you can do asynchronous IO now from Web Forms or from MVC or Web API or SignalR and free up that thread. For long running operations, though, in general you're better off to use WebSockets if you can, or SignalR — just substitute SignalR anywhere WebSockets is said — as it has a much lower per-request overhead. The other thing a long running request will do is use memory. Even if it's async — an async request that's long running won't use a thread when it's idle, but it still uses memory. I mean, there's still data stored in memory that represents that request. But a WebSocket request in ASP.NET — a special API introduced in .NET 4.5 — has a much lower per-request memory overhead. When you call AcceptWebSocketRequest and you give us the delegate, we unwind the request, free up most of the memory to do with that request, and then we invoke your delegate with a restricted context — not the full HttpContext, just a small set — so that we free up a lot of memory. It's about five times less than a full request. Now SignalR just does the right thing. If you use SignalR, we use WebSockets when we can. Otherwise we use long polling or forever frame or server-sent events, and it's async, so we just try and do the best thing we can. And that is the end of my talk — with the other people that I didn't do the talk with. That's my boss — his name is Scott Hunter, coolcsh. And you all know Scott Hanselman. So I have two minutes. If anyone has any questions, I can answer them right now on stage, or ping me on Twitter and I'll do my best to answer them. Otherwise, thanks for coming along. Any questions before I pack up? No? Okay, awesome.
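A sketch of that .NET 4.5 WebSocket API — a simple echo handler; the AcceptWebSocketRequest shape is the real one, but the handler itself is hypothetical:

```csharp
using System;
using System.Net.WebSockets;
using System.Threading;
using System.Web;

public class EchoHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        if (context.IsWebSocketRequest)
        {
            // ASP.NET unwinds the request and invokes the delegate with a
            // slimmed-down context, freeing most per-request memory.
            context.AcceptWebSocketRequest(async wsContext =>
            {
                WebSocket socket = wsContext.WebSocket;
                var buffer = new ArraySegment<byte>(new byte[4096]);
                while (socket.State == WebSocketState.Open)
                {
                    WebSocketReceiveResult result =
                        await socket.ReceiveAsync(buffer, CancellationToken.None);
                    if (result.MessageType == WebSocketMessageType.Close) break;

                    // Echo the received frame straight back to the client.
                    await socket.SendAsync(
                        new ArraySegment<byte>(buffer.Array, 0, result.Count),
                        result.MessageType, result.EndOfMessage, CancellationToken.None);
                }
            });
        }
    }
}
```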
ASP.NET’s been around for a number of years and the team’s developed some DOs and DON’Ts. Let’s explore our very best list of DON’Ts that you can apply today at work! Come see Damian Edwards, Senior Program Manager on the ASP.NET team, share internals, secrets and not-so-secrets of the ASP.NET APIs that you should be exploiting and the ones you should be avoiding.
10.5446/51406 (DOI)
What? On, on. Can you hear me? Hello. Yeah. Hello, hello, hello. Excellent. The timer has started, so we're going to get going. We are "deviant fallowards". No — someone created this account. We have no idea who created this account. We suspect someone at work. Andrew, most likely. Andrew's going to get it. But we don't tweet from this, so don't follow this account. I'm sure some of you have already. 67. If I refresh this page, is it? Yeah. Who followed? No, you can follow us at @DamianEdwards and at @davidfowl on Twitter. And we're going to talk to you today about SignalR. Who has seen a SignalR talk before? Who saw one at NDC last year or the year before? Okay. Who has, like, "new SignalR" in their apps? Fantastic. Yeah. Okay. So you're all familiar with SignalR then. Hopefully most people. For those who are not, I'm going to start out doing my normal demo. It takes about five minutes for people who haven't seen it before. And then we're going to do something a little bit different in this talk. We're going to write SignalR from scratch on stage — like, the framework itself. With the idea being that if you know more about the internals of how SignalR works, then you'll have a better appreciation of the pitfalls, the benefits, how it's working under the covers, so how you should be using it correctly. And you'll be able to answer all these great questions about SignalR. So I'm going to start out with my classic move shape demo. I'm just going to go up to NuGet and get Microsoft.AspNet.SignalR.Sample. And I'm going to get an exception from the NuGet client, which we'll ignore. Do, do, do, do, install the world. I accept. Do, do, do, do, do. And what this is going to do is bring in a fully functioning SignalR sample from NuGet into my app. So over here on the right-hand side, we can see it's brought in SignalR.Sample. And there's a file called StockTicker.html. So I'm just going to open that up and I'm going to run that. Pin that to the right-hand side in Chrome. I'm going to open IE, put it on the left-hand side, sorry, and then put this one on the right-hand side. And hit Open Market, and we'll see that both screens update at the same time. Okay, this is the real-time web. This is kind of what the point is. I can hit Close Market over here and they both stop at the same time. So I have this synchronized view from two different clients of some real-time data. So let's write something from scratch rather than using something that's pre-canned. So let's do Add Folder. We'll do MoveShape. And then in here I'm going to add a server class called MoveShapeHub. A hub is the intrinsic in SignalR that is the higher-level programming API — like a controller. So I'm going to say public void MoveShape. This is going to be invoked by clients. MoveShape is going to take an int x and an int y, which is the position of the shape. I'm going to decorate this with an attribute which controls how it's exposed to those clients — like, what name is this hub going to be known by to its clients? And when that comes in, I'm simply going to do a Clients.Others — tell them that the shape has moved to this new x and y position. So that's my server-side code. Let's add some client-side code. We'll add an HTML file, index. Drag in a bunch of references. So I need jQuery. I need jQuery.SignalR. I need a special reference, which is script — not script; src equals /signalr/hubs — which brings down that cool proxy generated for me at runtime by SignalR.
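Pulled together, the server half of that demo is roughly this (SignalR 1.x-era API, matching the names spoken above):

```csharp
using Microsoft.AspNet.SignalR;
using Microsoft.AspNet.SignalR.Hubs;

// Exposed to JavaScript clients as "moveShape"; relays new coordinates to
// everyone except the caller.
[HubName("moveShape")]
public class MoveShapeHub : Hub
{
    public void MoveShape(int x, int y)
    {
        Clients.Others.shapeMoved(x, y);
    }
}
```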
That gives me JavaScript I can use in the client to call out to the server. And I also need to go back out to NuGet at this point and get jQuery UI. I need that in order to perform this demo. So let's install that. Okay. Let's add that one in here as well — jQuery UI. All right. So let's add some code in here. Function — runs when the document loads. Get a handle on that hub: $.connection.moveShape. So that was the name that I gave it back over here. So now I have a handle on that from my client. And I'm going to get a handle on a DOM element with an ID of shape, which I'll create in a moment. I'm going to add the client-side function that I was calling from the server. So that was: shapeMoved is equal to a function that takes an x and a y position. Okay. So that, again, over here — Others.shapeMoved. I have to call that function from the server, so it has to exist on the client. And when that happens, I'm going to move the shape around and say shape.css — left is equal to x and top is equal to y. I got a bug. I got a bug. Shape. Shape. It should be dollar. It should be pound shape. Thank you. And then I'm going to start my connection. This is why it's nice having two people on stage doing this. $.connection.hub.start — that starts the connection from the client to the server so that I can actually send these messages back and forth. And when that's done, because it's asynchronous, I'll go ahead and wire up the client support. So that's going to be shape.draggable — that's why I needed jQuery UI. And into that, I'm going to say: when the drag event takes place, I want you to go ahead and call the server. Oops. Let's do that. So I need to go ahead and call the server when the drag event takes place. So I'm going to say hub.server.moveShape, which was the method I added up here — MoveShape. And let's pass in the left and top position. So that's this.offset.left and this.offset.top, or zero, depending on whether it's been initialized yet or not. So that's my JavaScript. Let's add that UI so we can see it. Div ID equals shape. Let's add some style. So we'll say hash-shape, and we'll give it a width of 100 pixels, a height of 100 pixels — oops, not that one — a background color of that blue over there, and a cursor of move. Okay. Hopefully I've made no mistakes in the first five minutes of this talk. So there's move shape on the left in Chrome, move shape on the right in IE, and I pick it up and move it, and it moves on the other side. Okay. So that's sort of the quintessential opening demo that I generally do with SignalR. Now that you've seen how it works, let's throw it away. And let's write a SignalR library from scratch. And for this, Fowler is going to do the first part of this demo. All right. So I want to build a SignalR server from scratch, starting from an empty web project. So I go here, create a new project, empty application. And I have two helper files. It's not from scratch — it's kind of from scratch. There are two kind of pieces that I created. Wait, you're not going to write everything from scratch? Most of it. Most of it. It's still legit. I'm going to write all my bit from scratch, just so you know. No bugs. Well, the way I figure, either it works first time and there's rousing applause — everyone's amazed that we were able to write 300 lines of code with no bugs — or there's problems. And then we debug it live on stage, which everyone loves as well. And we finally get it working with some active debugging.
And if that all goes pear-shaped, I just throw books at you. And you're all happy anyway. So how can it possibly go wrong? Beautiful. Good. All right. So I have this awesome region here with some complex code. The bus is the heart of SignalR. There's two methods on it. There's publish and subscribe. So it's simple pubsub. What were those methods? Publish and subscribe. Publish and subscribe. And subscribe. Yep. So — I'm going to slow his Bajan accent down, I have to. So publish takes two things: a signal and some data. We store a map from string to callbacks. So if you publish, it goes over every callback, and for each one it calls the method on the callback. This just looks like pubsub. Yeah. It's pretty basic, besides all the complex locking and interlocked exchanges and all the async trickery. Yeah. All right. So subscribe takes a bunch of signals, a cursor, and the callback, and this thing called terminal, which we'll explain later on in the talk. The cursor lets you know where you are at in the stream of data. The signal — so whenever someone publishes to foo and I'm subscribed to foo, I get a foo callback on my subscription. So do we all get that? It's just pubsub, right? Like, I subscribe with a string saying, hey, I care about foo and I care about bar, and I give you a delegate. And then later on someone says publish this to foo, and I get that message because I subscribed to it. Very, very simple at its core. And you see SignalR getting closer, right? Yeah. Response — a few helpers I added. I have to add Json.NET via NuGet. We use Json.NET, which is the best — but, you know, use whatever makes you happy. So there's WriteJson, and something pretty simple to write SSE. SSE stands for server-sent events. That's a protocol that is actually done by the W3C — server-sent events. It's a pretty basic protocol that describes two things: a client-side API, which is called EventSource, and a server-side format for framing data. So it's pretty simple. It's "data:" and the actual data and then two new lines. So this is a web standard for doing server push. Server push. It's HTTP streaming. This must be really new. Yeah, well, kind of new. When was the — what's the date on this spec? 2009. Okay. Yeah. So this has been around a while, actually. And everyone supports this? Except for — hey, hey. That's okay. That's all right. It's all good. So we'll start by writing a persistent connection. Spell it correctly. So we'll derive from HttpTaskAsyncHandler. This is a new type in .NET 4.5 — task-based handlers. Before, you had to write one of these, IHttpAsyncHandler, and it was Begin/End. And so it was painful. Horrible. Who's ever written an async handler in ASP.NET with Begin and End? I pity you. Horrible, horrible API. Task is much nicer. So if you've used SignalR before, you know that the main thing you do — the first thing you do — is negotiate. So I'm going to create some simple, advanced routing, right? Look at the request URL. This is how real developers do routing. Beautiful. Look at that. No fancy token replacement — that's just heavy, right? This is lightweight. It's beautiful. This is the lightest you could do. So whenever you negotiate, you get a new connection ID from scratch. Make it random. It'll be a new GUID. Beautiful. And I will write this out over the wire as JSON. And for that, I need our helpers — WriteJson to the response.
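A rough sketch of that bus, minus the locking, cursors, and async trickery the real helper region hides (so this version is not thread-safe; it just shows the publish/subscribe shape):

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

public class MiniMessageBus
{
    // Map from signal name to the callbacks subscribed to it.
    private readonly ConcurrentDictionary<string, List<Action<string>>> _subscriptions =
        new ConcurrentDictionary<string, List<Action<string>>>();

    public void Publish(string signal, string data)
    {
        List<Action<string>> callbacks;
        if (_subscriptions.TryGetValue(signal, out callbacks))
        {
            // Fan the message out to every subscriber of this signal.
            foreach (var callback in callbacks.ToArray())
            {
                callback(data);
            }
        }
    }

    public IDisposable Subscribe(string[] signals, Action<string> callback)
    {
        foreach (var signal in signals)
        {
            _subscriptions.GetOrAdd(signal, _ => new List<Action<string>>()).Add(callback);
        }

        // Disposing the subscription unhooks the callback again.
        return new DisposeAction(() =>
        {
            foreach (var signal in signals)
            {
                List<Action<string>> list;
                if (_subscriptions.TryGetValue(signal, out list)) list.Remove(callback);
            }
        });
    }

    private sealed class DisposeAction : IDisposable
    {
        private readonly Action _dispose;
        public DisposeAction(Action dispose) { _dispose = dispose; }
        public void Dispose() { _dispose(); }
    }
}
```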
I can pass the negotiation there. And I can set the content type. That's application/json. And we're done. We return an empty task — FromResult. So now I should be able to get a request to a URL, which I don't have yet. I have to add a new one. So we're fake async at this point. We're returning a task, but there's actually nothing async going on here. Totally synchronous. It's just totally synchronous, because we're not doing anything that's worthy of async yet. But this handler is going to be used for multiple things, which we'll see in a minute. You have to hook this class up somewhere. So I have a new file that I can run — you just add a chat.ashx. So now I can just derive from our handler. And I can run this. You're missing a T in PersistentConnection. It's all going to work, though. Beautiful. Yeah. It's fine. It's fine. But I have to fix it. Refactoring. Beautiful tooling support and everything. All right. So, chat. But not cat. We should call it cat. Look at that. Yeah. All right. So that was boring. So we've basically just done Ajax using a handler so far. That's all we've really done. Now, the fun stuff. Whenever a client connects via SignalR, we actually get stuff from the query string. The client sends three things. The transport. And two more things: the connection ID that we got in negotiate — your shift gets stuck; I have a real keyboard, that's why — and the cursor, which we call message ID. All right. Now we'll create a list of signals. Remember, in the bus you saw that subscribe takes signals. I'm going to create those here. The list is actually two things by default: GetType().FullName, and it's also the connection ID. So one thing that's interesting is that in SignalR, when you publish to all connections or to one connection, you're actually just doing a publish to a specific string — either, like, a type name like chat, or a connection ID (that's a GUID), or a group name. Or a group name, yeah. So now I'll switch on transport. Now implement server-sent events: return HandleServerSentEvents — let's see — now pass in the cursor and the signals. Generate that. If it's not a transport I know of, I'll throw new NotSupportedException. All right. So in the code, the first thing I have to do is actually get the bus. So let's create a private static readonly MessageBus. It can't be per request — it's one for the entire world. So this was this type that we looked at before, that you pasted in. Okay. Yeah. The one that should be hidden from people. That one. So when someone comes in with this data, we will subscribe to the bus, passing the signals, the cursor, and we're actually going to have a callback here. It returns a bunch of messages received by the actual subscription, and the index that we're up to. So this thing returns a disposable. And whenever someone calls publish, this gets called back. So it's pretty simple. So we have an IDisposable now that represents our subscription. So when we want to cancel the subscription, we can just dispose it, and then the subscription goes away. Okay. So now, whenever something comes in, I want to create a response to send to the client. So now we're inside that lambda, right? We're inside that callback. So I have two things: C, an index, and M, messages. And I'll just do a Response.Write — well, actually, our helper. I'll write the payload over the wire. And we're almost done. Not quite yet. Whenever you're doing anything async in ASP.NET, you have to return a task.
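The framing helpers being described might look something like this sketch (WriteJson and the SSE writer are the demo's own helpers, so the exact shape here is guessed):

```csharp
using System.Web;
using Newtonsoft.Json;

public static class ResponseHelpers
{
    // Server-sent events framing: "data: <payload>" then a blank line.
    public static void WriteSse(HttpResponse response, object payload)
    {
        response.Write("data: " + JsonConvert.SerializeObject(payload) + "\n\n");
        response.Flush(); // push the frame to the client immediately
    }

    // Plain JSON body, used by negotiate and long polling.
    public static void WriteJson(HttpResponse response, object payload)
    {
        response.ContentType = "application/json";
        response.Write(JsonConvert.SerializeObject(payload));
    }
}
```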
So I'll create this thing called TaskCompletionSource, which lets us decide when the task is going to end or not. So I return this — do the async here. Anyone use TaskCompletionSource before? So this is what you use when you are writing your own async APIs, rather than just consuming async APIs — you use TaskCompletionSource to create the task that you're going to return. And then at some point in the future you complete it, or fault it, or whatever it might be. So I await this task, and then when I finish, I'll dispose the subscription. So the interesting point here is that when you're doing async in ASP.NET — especially async handlers — you're basically saying to ASP.NET, I am now in control of the lifetime of this request. Normally in ASP.NET, the request comes in, it processes your MVC action or whatever, and it returns, and the request is gone. With async, you give ASP.NET back a task. And ASP.NET will not destroy that request until that task has finished — whether it's completed or faulted or canceled. And so if we don't do this here — if we don't finish the task — we'll just leak this request forever. No one will ever finish this request. Even if the client disappears and flies to Haiti, this thing is going to live forever. So we need to shut this thing down. So to fix that — in .NET 4.5, there's a way to get called back whenever a connection dies in ASP.NET. It's called the client disconnected token. And this is a CancellationToken type. So I can say Register: when this fires, I'll end the task. So when would this fire? So if I start my browser, connect to this endpoint, and then I close my browser, this will fire. It'll fire. Okay, if I hit F5, that one will fire. So we're not done yet. This is the general frame. I need a few more things. This magic function. So in IIS, there's this thing called dynamic compression, and it buffers by default. So it screws up streaming calls to the client. So we have to remove this magic header. Anyone ever battled trying to do HTTP streaming and then having something do compression or buffering in between the server and the client? You see a huge lag. You see this massive lag, because the thing in the middle says, yeah, you might have bytes, but I don't think there are enough bytes that I can compress well yet, or efficiently, so I'm just going to hold onto them until you send more. Which can be really frustrating when you're trying to stream data down to the client. So we just remove this header, which tricks IIS — we're just saying, oh, the client doesn't support gzip, therefore I'm going to no-op. So we just remove this header. All right. I also set the content type to text/event-stream, for the client to actually pick it up. And to make the process bootstrap quickly, we have to write out a JSON payload for the client to actually trigger the events on the client side. And this is the entire thing. So it's not easy — I joke. So we can receive now. Let's add sending. Add a new endpoint. With our complex routing again. And we'll do this. And all this will do is call — get some data first. So when someone sends from the client to the server, it'll be a form post. I'll get it from the form. I'll get the field called data, and then I'll call OnReceived with the connection ID and the data. Generate that. And then this will be protected abstract. So we can override it in our chat class. I'll make this abstract as well. Build it. So we have a handler now that actually handles three distinct types of requests. We have that negotiation request.
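The lifetime pattern just described, as a compact sketch (ClientDisconnectedToken is the .NET 4.5 API mentioned; the method itself is illustrative):

```csharp
using System;
using System.Threading.Tasks;
using System.Web;

public static class LongRunningRequests
{
    public static Task ProcessAsync(HttpContext context, IDisposable subscription)
    {
        // We control when this task — and therefore the request — finishes.
        var tcs = new TaskCompletionSource<object>();

        // Fires when the underlying connection dies (browser closed, F5, ...).
        context.Response.ClientDisconnectedToken.Register(() =>
        {
            subscription.Dispose();  // stop the bus subscription
            tcs.TrySetResult(null);  // let ASP.NET tear the request down
        });

        return tcs.Task;
    }
}
```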
We have the long running request — the server-sent events request. And we have sends. So when someone wants to send, it's the same handler for all three. So now chat needs this thing to be overridden, because it's abstract. So if you've ever used this class before in actual SignalR, it looks kind of familiar. Anyone use PersistentConnection in SignalR? So I showed hubs before, which is the higher level; persistent connection is the lower level. It's quicker for us to write on stage. But just as functional, as we'll see. I'll call broadcast. Pass the data. And then I have to write broadcast itself. Let's put it on the connection itself. Here. Chuck it in. I'll just return — can anyone tell me what to put here? What line of code goes here? Any guesses? Any guesses as to what I'm going to put here? All right, I'll help you out. So you can tell that when you publish to any endpoint — like I said before, to a connection or to all clients or to everyone or to a group — you're just doing a publish to a specific signal name. And this type name, we subscribe to it, if you look right here. So when you look at chat, the type name would be WebApplication38.Chat. That would be the full signal name. And when you call broadcast, it will broadcast to everyone connected to that signal. So now we have a full working server-side implementation. Let's add a client to test it out. So I will go to the same endpoint, copy this connection ID, add to the query string connectionId equals this, and transport equals serverSentEvents. So now you're making that long-running request. Yep. Right? Okay. And you see you've got the data — the first set of data. Let's use our awesome client, Fiddler, to send data to this connection. I think this is the most convoluted chat demo that's ever been written on stage. Usually we try and do it in five lines. I think we've written about 500 now. This is to make you appreciate SignalR. Okay. So now we're going to the send endpoint, and I will do a post, and the data will be "hello", and you see the content type equals application/x-www-form-urlencoded. All right, I'll zoom out and I will send. Oh — top left-hand corner, just zooming up there. So there's our data coming down our live stream. You can see the little spinny is still going in Chrome, because we're streaming this down from the server. And to prove that two connections can work — it's not just one connection — I will copy the same URL and use another client. Damian has Git Bash, because Windows doesn't have curl installed normally. I don't use curl. I'm not a hacker. How do you live? I saw it in a movie. That's all I've ever really known of it. I saw it in the Facebook movie. Oh, yeah. Right. He used curl, right? To download the images of all the guys and the girls from the opening scene. And it got to both clients. It's a full broadcast. Awesome. That's cool. That's server-sent events. One client, two clients, different clients — curl and the browser. So what if my browser doesn't support server-sent events, like IE, which we mentioned before? You're SOL. And you get to use none of it. So SignalR also does fallback. So we'll implement two transports. Long polling. Return HandleLongPolling. And it's the same input as server-sent events. Put this down by server-sent events. I'm using a new feature in VS to share code efficiently: just copy all this. Oh, well, I don't have to type it all again. I mean, I need to update to this version. This is awesome. This is better, right? This is so good. Oh, OK. That's why I don't have it.
I think I installed it. Awesome. To make this work, there's a few changes I have to make. Remove this line and this one, because we aren't streaming anymore — it's a simple request-response. application/json is the content type. And then there's a few more changes. This has to be async. And this is no longer SSE — it's just JSON. And the request ends in long polling when you actually get data. So the request is still long running, but only while there's no message. Once I get a message, I write it and then I just finish the request. Correct. I see. So what happened here is you can end it two ways: if you get a message, or if the browser closes and the client goes away. And there's one more thing I have to add. Terminal: it's true. This tells the bus that you can only ever invoke this callback once, and then you stop firing stuff. We actually had some pretty nerdy bugs yesterday in the evening and we were freaking out and had to lock everything — so now it's all solid. That's why that code is in a region. It works beautifully. That's it. That's long polling. Done. Done, done, done. I didn't explain anything wrong, right? I think so. It builds, right? Okay. That's long polling. Let's refresh this, because I just restarted the server. So server-sent events here. curl, we're going to use long polling. All right. Now Fiddler again. We will say hello, NDC three. Yay. And we poll again. And we get NDC four. We poll again. NDC five. That's how polling works. You're creating a new request — which is... there's no request right now, right? Okay. So you create a request, then a message comes in at some point from Fiddler, that returns the request, and then you have to go and start it again. Okay. So what happens then — I mean, there's no request right now. So if you send a message now, the server-sent events one is going to get it, right? Six. So we ran that in server-sent events, but we don't have a request for long polling. You send seven. So what do we do? If I now — how do I come back? So if we run that again, what happens? That sucks. So we get nothing. Okay. That's not good. That's not good. So — that thing called the cursor. We have this thing called the cursor that tells us where we left off. So you can see that we got up to cursor two. So here you just say, give me everything since two, and you get a response with everything that I missed. Ah, I see. So it's buffering on the server. Yeah. Okay. But now it's finished again, though. So I'd have to reissue now with a new cursor — a new one with four, because it told me I got four. Four was the last one. And now you can go back and send. Now that I've sent a new one — I see. Zoom out. Then we get that in both places. Okay. That's a server. It looks like it's a functioning server. Yeah. So as great as that chat client and app is, I think we could make it a bit better. Your turn. Okay. So let's kill Fiddler and let's kill curl, because, as impressive as that is, I think we all know that — you know, SignalR is a client and a server library. We've done the server; now let's go and build the client library, so I can actually build a real application here. So I'm going to pull jQuery into this project now, because it's so much easier running this with jQuery's help. Let's do that. Okay. Over to my scripts. So I'm going to add a new JavaScript file. We'll call it signalR.light.js, and I'm going to drag in jQuery. So we're going to write a SignalR — a jQuery extension, essentially — to act as our client-side library for SignalR.
The extension is going to pass in window and window.jQuery, and I'm going to take those in as window and dollar. Who's ever written a jQuery extension plugin, done this type of modular JavaScript development? Okay. I'm going to do a bit of scaffolding here. The last thing I'm going to do is export my extension. So my extension is going to live on $.connection. Okay. I don't have a variable called signalR yet, so let's create some variables. I'm going to have a bunch of transports, like we had in the server. And I'm going to have signalR. signalR is going to be a function that takes a URL. And it's going to have a prototype. And I'm going to use what I call the super constructor pattern, which I think is what Crockford calls it, or I'm just completely misunderstanding what he called the super constructor. And I'm going to return from this up here... sorry, I'm going to return signalR.prototype.init(url). And over here I'm going to say this.url is equal to url. So in this case, the signalR function isn't the actual constructor; the init function is the constructor. And the reason I do that, and the one last thing I need to do to make this work, is say signalR.prototype.init.prototype is equal to signalR.prototype. Can we use TypeScript? What do you mean? That's beautiful. I love JavaScript. That's awesome. So what that lets me do is, now, down here I can say var con is equal to $.connection(myUrl), and I should get IntelliSense. So if I don't screw up the editor like this, I can do con-dot... and it doesn't know what it is, because I've done something wrong. I should be doing new, because init is actually a constructor. JavaScript rocks. See, IntelliSense helped me there. So JavaScript and IntelliSense does help sometimes. So we can see now I don't get all those yellow things saying "I don't know what's going on," and I have an init method. So VS knows that the right stuff is going on, which means I've got my crazy super constructor stuff wired up correctly. Okay, so that's good. I need some transports. So let's go ahead and create our transports. So I'm going to have transports.underscore with some common logic for transports, and then I'm going to have the specific transports. So transports.serverSentEvents is going to be equal to an object. It's going to have a name, which is serverSentEvents. It's going to have a start method, which is going to take the instance of the connection to start, and it's going to have a send method, which is just going to delegate directly through to the common send method, which we'll write in a minute. And I need two of these, because we saw we had long polling as well. So let's duplicate that. Long polling here. Turn that off. Long polling there. That didn't work. Long polling. Let's go ahead and add this common logic. So I need a send method, which will take a connection and some data. And I need an onReceive method, which is common again between the two, which again takes connection and data. So what is send? Send's pretty straightforward from the client: it's just an Ajax POST. Now, in the real SignalR, we do support WebSockets; it's just a lot harder to code the full WebSocket stack on stage, so we're using the two simpler transports for this demo. So this is pretty straightforward. I just do $.ajax. I'm going to go to connection.url plus, I think it was, send. And we do, this is what Fowler was doing in Fiddler before, connectionId equals connection.id, giving myself some room here, plus transport equals connection.transport.name.
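Assembled, the scaffolding described so far might look roughly like this; a sketch where the names follow the demo, and the exact code on stage may differ:

```js
// A sketch of the module scaffolding and the super constructor pattern.
(function (window, $) {
    var transports = {},
        signalR = function (url) {
            // Delegate to init so $.connection(url) works without new.
            return new signalR.prototype.init(url);
        };

    signalR.prototype = {
        init: function (url) {
            this.url = url; // init is the real constructor here
        }
    };
    // Point init's prototype back so instances see signalR's prototype members.
    signalR.prototype.init.prototype = signalR.prototype;

    // Export the extension.
    $.connection = signalR;
})(window, window.jQuery);
```

The payoff is exactly what the IntelliSense moment showed: var con = $.connection(url) hands back an object that carries everything defined on signalR.prototype.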
And then we need to actually send our data. So the type of request is a POST. And the data is going to be an object that has a data member with some data. Data, data, data, data. Awesome. Data, data, data, data. I say dah-ta. After three years in the US, I now say day-ta, and I'm very ashamed of myself for saying that. You should be ashamed. So, onReceive. This was the response that Fowler was sending back from his server code, so let me get that. This is going to be JSON, so I'm going to say JSON.parse the data. There it is again: data. And the messages that I actually care about on the response are on the M property we saw. There was also a C property, which was the cursor, so I'm going to store that as connection.messageId; it comes from response.C. And then finally, I'm going to loop over those messages and relay them back to the user code. So let's do that. Let's say for i up to messages.length. I'm going to use jQuery events here to make this really simple for myself: $(connection).triggerHandler. We'll trigger a receive handler on the connection object, and into that I'm going to pass in the current message that we're looping over. Okay. So that's my common logic done. Let's jump down now to our transport-specific logic. All right, let's start server-sent events. So start is asynchronous, so I'm going to use the jQuery equivalent of the task API, which is the deferred. So var d is equal to $.Deferred(); I'm going to create myself a deferred, and return that. Okay. So the first thing I can do is check: does the browser even support this standard? I'm going to do feature detection. If not window.EventSource (which I don't get IntelliSense for), then, you know, no SSE support. We'll just fail here. Just fail now for you. You have comments in your code. It helps me remember it. I have comments. I know. So I'm going to d.reject, which fails that task, and then I'm just going to shortcut return from this entire method right here. Okay. Okay. Next thing: so I'm in a browser that supports server-sent events, so let's go ahead and actually try and create that. I'm going to store it on the connection. connection.eventSource is equal to a new window.EventSource, and I need to pass in the URL, plus connectionId equals plus connection.id, plus transport (and again, you saw Fowler type this into the browser before) and this.name. Okay. So now I have my server-sent events. But if it fails for whatever reason on connect, perhaps the server is down, or there was something in between that didn't like the mime type, or whatever it might be, then I need to reject there as well, and return d.promise again. Okay. So now we're down to the point where we have an event source; we have that stream coming from the server. So let's wire up some events so we can do something with it. So connection.eventSource.addEventListener; this is old-school DOM events. There's an event for open, which tells us when it has actually started, and then there's an event for when a message comes in, so let's add that one. Okay. So when it's open, that's great. You know, we're connected. I can d.resolve. So that finishes the start method, essentially, right? Because it's asynchronous. That's done. That's all we have to do there.
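Before the message handler, here is the common logic from a moment ago, assembled. A sketch; the M and C property names are the ones called out on stage:

```js
// Shared transport logic: a form-encoded POST for sends, and a common
// receive path that tracks the cursor and relays messages to user code.
transports._ = {
    send: function (connection, data) {
        $.ajax(connection.url + "/send" +
               "?connectionId=" + connection.id +
               "&transport=" + connection.transport.name, {
            type: "POST",
            data: { data: data }
        });
    },
    onReceive: function (connection, data) {
        var response = window.JSON.parse(data),
            messages = response.M;         // messages ride on the M property
        connection.messageId = response.C; // C is the cursor
        for (var i = 0; i < messages.length; i++) {
            // Relay each message to user code via jQuery events.
            $(connection).triggerHandler("receive", [messages[i]]);
        }
    }
};
```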
Back in the transport: if I get a message, well, that pretty much guarantees that it's open as well, so I'm going to resolve it again, because it doesn't matter. Resolve is idempotent; it'll only happen once anyway. And EventSource, even though it's a standard, does differ from browser to browser, and sometimes the message event may get raised before the open event when it shouldn't be. So this is just the safest possible thing. So, I have a message. But I know that Fowler, back up here in his SSE code, sent this out here. The first thing he wrote out was this init string. It's not a message; it's just a string called init. And we do that to prime the streaming pump. Because previously, otherwise, all we've done is send a header. And if all you do is send a header and then not send any data, any layer between ASP.NET, IIS, Windows, TCP, front-end load balancer, edge cache, proxy, NAT device, router, all the way back to the client can say, well, there are no body bytes yet, so I'm not even going to bother flushing the header yet. But I want everything to be flushed. I want the header to be flushed, because the client-side API for EventSource won't raise open until that header is received. So we're just going to prime that pump by sending it out. So I need to throw that message away. So if e.data is equal to init, I'm just going to ignore it. Otherwise, this is a message I care about, so we'll say transports.underscore.onReceive; we'll pass in the connection, which was a parameter, and we'll pass in e.data. OK. So that is my server-sent events code. Let's do long polling. So how do we do long polling? Long polling is just Ajax, over and over again. OK? That's all it really is. Very similar. I'm going to do d equals $.Deferred(). And var that equals this, because I'm going to need a handle on that. Return my promise, which again represents the asynchronicity of this start. And now I'm going to write one of my favorite things in JavaScript: an immediately executing recursive function. Say it ten times fast. Immediately executing recursive function. Immediately executing recursive function. I need a drink now. So it's not anonymous, because I gave it a name, but it's not assigned to anything either. It's just a function I declared inline and then executed immediately, and then, as we'll see in a minute, it's going to call itself. It's kind of cool: a function that only ever exists as an expression, that's never assigned, but lives forever because it calls itself over and over again. It's kind of nice. And what is it? It's just Ajax, right? So let's just do Ajax. Ajax to connection.url plus... we want to do connectionId, same old thing we did before, equals (this is not very DRY, I know) connection.id. And transport equals, this time, that.name. And we need the cursor on long polling, because we're not streaming: every time I come back, I need to tell the server where I was in that stream by giving them this message ID. Now, in the real SignalR, all transports use that cursor. So even if the server-sent events stream was interrupted for some reason, we have reconnect logic in the real SignalR that will take that cursor, create a new EventSource, and append that message ID to the URL before it does that. In this example we don't do that, just for the sake of simplicity. So messageId is equal to connection.messageId. And what do I need to do here?
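Put together, the server-sent events transport just described might look like this. A sketch: the talk only says a failed connect should reject, so the error listener here is one guess at how that was wired:

```js
// Server-sent events transport: stream messages over one long request.
transports.serverSentEvents = {
    name: "serverSentEvents",
    start: function (connection) {
        var d = $.Deferred();

        if (!window.EventSource) {
            d.reject(); // no SSE support in this browser, fail fast
            return d.promise();
        }

        connection.eventSource = new window.EventSource(
            connection.url + "?connectionId=" + connection.id +
            "&transport=" + this.name);

        connection.eventSource.addEventListener("error", function () {
            d.reject(); // connect failed: server down, proxy, mime type...
        });
        connection.eventSource.addEventListener("open", function () {
            d.resolve(); // the stream is up, so start() is done
        });
        connection.eventSource.addEventListener("message", function (e) {
            d.resolve(); // idempotent; a message implies we are open
            if (e.data === "init") {
                return; // throw away the pump-priming init string
            }
            transports._.onReceive(connection, e.data);
        });

        return d.promise();
    },
    send: function (connection, data) {
        transports._.send(connection, data);
    }
};
```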
Back in the poll: the type is going to be a POST, and the dataType is going to be text. Now, I'm doing that because I have common logic for the onReceive, which you saw me write before. If we go up here, here's my common onReceive logic, and you'll note that I'm calling JSON.parse. So I don't want jQuery to do that for me; I'm just telling it, hey, just treat the response as text. Okay. So when that's done, run this. I'm just going to assume it works. There's no error handling in this demo; everything just works; we don't worry about it. So when that's done, I have some data. So here's my response from the server, and I need to do a few things. First of all, if I've got a response, I know the connection is alive, so I need to resolve that deferred that was representing the start of the connection. Okay? I know I just got a message, so it must have started. Okay. So I can resolve that. Next thing I need to do is process the response. Now, in curl, you saw Fowler just look at the response. Okay, so this is the equivalent here: I'm just going to say transports.underscore.onReceive, passing in connection and response. And then the last thing I need to do is... what? What's the last line? Poll. Yeah. So there it is. There's the magic of the immediately executing recursive function. So that, right there, is long polling in its purest form: make a request; at some point you get a response; process the response; poll again. Okay? Kind of cool. The last thing I need to do, though, is force this deferred to resolve even when I haven't got a request... oh, sorry, haven't got a response. Because otherwise, when I call .start on this connection, the deferred that's returned will never finish until there's a message. Now, because it's Ajax, there's really no good way of knowing whether the request is good; it either succeeds or it fails sometime in the future. So I'm just going to hack it: setTimeout, function, at some point in the future just resolve that, and just assume that it's fine after about 250 milliseconds. Okay? So the caller calls start, that's going to call poll, which starts the Ajax, which is asynchronous, and 250 milliseconds later it's going to resolve that deferred. Now, SignalR 1.x does exactly this. This is actually how it works. In SignalR 2.0, we've done the work to remove this, by ensuring that when you start even a long polling connection, we send an initialization message back from the server, and we use that to determine that the server is alive, and we use that to trip the deferred rather than just a timer. But for the sake of the demo this is much easier, and it is actually totally valid; it's what we do in SignalR 1.x, as much of a hack as it is. All right. So we scroll up. That's all of our transport-specific logic, so let's go back now to our actual signalR API and add the missing methods that we need to make it useful. So I'm going to need a start method to start my connection, a send method to send data over the connection, and a receive method to register a callback to be called when I get a message from the server. Let's fill these in in reverse order. So receive is fairly straightforward. I'm just going to say $(this); again, I'm going to use jQuery's eventing system for this. So on the receive event, go ahead and invoke this delegate right here, which will just call the callback and pass in the data. Super easy.
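The long polling transport, assembled along the same lines. Again a sketch: the 250 millisecond resolve is the hack just described, and the demo glosses over messageId being undefined on the very first poll:

```js
// Long polling transport: Ajax, over and over again.
transports.longPolling = {
    name: "longPolling",
    start: function (connection) {
        var d = $.Deferred(),
            that = this;

        // The immediately executing recursive function: poll, process, poll.
        (function poll() {
            $.ajax(connection.url +
                   "?connectionId=" + connection.id +
                   "&transport=" + that.name +
                   "&messageId=" + connection.messageId, {
                type: "POST",
                dataType: "text" // onReceive does its own JSON.parse
            }).done(function (response) {
                d.resolve(); // got a response, so the connection is alive
                transports._.onReceive(connection, response);
                poll(); // and around we go again
            });
        })();

        // No good way to know an Ajax request "took"; just assume it did.
        window.setTimeout(function () {
            d.resolve();
        }, 250);

        return d.promise();
    },
    send: function (connection, data) {
        transports._.send(connection, data);
    }
};
```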
Send is very simple as well: just delegate through to the current transport and call send on that, passing in a reference to itself, because that transport method is shared. It's not an instance method; it's essentially a static method on a singleton JavaScript object. So I'm passing in the instance data, which is the current this from the prototype. And then, lastly, I have to write the hardest method in this sample, which is the start method. So start is asynchronous; I need to go off and actually start the connection. Now, we saw Fowler write the server code, where we had the negotiate request, then the long-running request, and the send request. Well, we've done the send request; that's this one here. So we must have to start off with negotiate. So I'm going to get a handle on the current instance as connection. Then we're going to negotiate with the server. So I'm going to say return $.ajax. Now again, this is asynchronous, so I'm returning the deferred object through to the caller. I'm going to go to connection.url plus (poor man's routing) negotiate. And then when that is done, we're going to run this delegate. We're going to get a response from the server. Oops. Okay. So we get our response, and I can say connection.id is equal to response.connectionId. Because back up here, if you remember, in negotiate all we did was create a GUID on an object and serialize it down the wire as JSON. So that's me consuming that. Now I have a connection ID; let's go ahead and actually connect. Now, this is interesting. I have two transports that both have a start method that is asynchronous, and what I obviously want to do is loop through the transports and call start until one of them works. But I can't just do a for loop. Why not? Why can't I just do a for loop? Because it's not C sharp. Because it's deferred. Because it's not C sharp; thank you. In C sharp, I could just do a for loop and just use async await, and it would just work, right? But in JavaScript, I don't have async await, so I have to do the old-fashioned async-loop recursive-function stuff. So what I'm going to do is build myself up a list of strings, which are the transports I'm going to try. Let's do that. I'm going to say for var key in my transports logic object: if that key is equal to an underscore, ignore it, because that's my common logic; I don't care about that one. Otherwise, I'm going to push that onto my list of supported transports. So now I have a list of supported transports; it has two members in it. I could have just created an array with the names in it, but this is obviously better factored. So now I have a list. Now what I need to do is create a function that's going to recursively call itself to do this async loop. So I'm going to say function tryConnect, which will take an index. And the last thing I'm going to do is return the call to tryConnect, passing in zero. Okay? So remember, this thing is entirely async: start returns the Ajax, which itself returns a deferred, so we're going to end up with a long chain of deferred objects, which eventually will finish. So inside my tryConnect we'll say var transportName is equal to supportedTransports[index]. Get the current transport for this iteration by using its name, grabbing it off the global transports object using its name as a key. Okay? Because one is an array and one is an object. Now I have that; I can attempt to start the transport, passing in the current connection.
And then when that is done, it's either going to succeed or fail; this transport is either going to work, or it's not going to work and I have to continue processing. So I'm going to return transport.start from inside tryConnect, which itself is being returned from inside the then delegate from my returned Ajax. So here's my task chain, essentially, right? So what do I do? If it succeeded, I'm connected. I'm done, right? So I just say connection.transport: I'm going to store that transport away on the connection. Okay? So I'm connected; it's done. I don't have to return anything; jQuery will do the right thing. It'll execute this delegate (see, there's no return) and go, okay, well, the task chain is done; just unwind the task chain. But if it failed, the second delegate, I need to move on to the next one. So return tryConnect(index plus one). Okay? So there it is. There is the async loop inside JavaScript, using a deferred-callback sort of paradigm thing. So we have lots of returns of lots of deferreds, and jQuery just does the right thing and ensures that all this unwinds and just finishes when it finishes. What happens if, like, everything fails? So if everything fails in this demo, everything fails. Where's the base case? So there is no base case: if both server-sent events and long polling fail, this will re-enter with a two, and then obviously you'll just get an index-out-of-range type error. And hey, that doesn't matter; in real SignalR, obviously, we do that properly. For the purposes of this, I'm just going to assume that one of them is going to work. Okay? And I think that's it. Is that my client? 154 lines is what it should be. All right. So let's write a real client then. Let's add a file: chat.html. Let's add input, ID equals message; input, type equals button, ID equals send, value equals Send. And I need somewhere to show my messages, so we'll do a UL, ID equals messages. There we go. Okay. Let's add some references: I need jQuery, I need my awesome little signalR library we just wrote, and I need to write some more script. Okay. So, function: when the document has finished loading (just checking my time), when the document has finished loading, I'm going to go ahead and grab a connection. So we'll do var con equals $.connection to chat.ashx. It's our first real wire-up now. con.receive: when I get something from the server, I want you to invoke this callback, and I'm going to take that message and shove it into my UL. So we'll do UL, and we'll do messages. Oh, thank you. You get a book. You get a book because you picked that up. He did it before you did. You get to be my safety. Poor man's templating. That's the most awesome templating in the world. Yes, that's a cross-site scripting vulnerability. I know. I could fix it. You want me to fix it? I'll fix it, off script. Easiest way to fix cross-site scripting: text. Did we actually test this when I did this? I didn't do it. All right, so fix cross-site scripting now. Anyone ever done that? Taking an arbitrary string: use jQuery to build the DOM for you via the correct APIs. So .text() will encode, rather than using .html(), which will go, oh, it's HTML, just put it all in. And then I just retrieve the text again. Crazy. It won't work at all, I'm sure. And then I start the connection, and when that's done, I'll wire up my UI. When using SignalR, you don't want to add the interactive logic, like wiring up your DOM event handlers, until the connection has started.
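For reference, the start method and its async transport loop, assembled as a sketch. The connectionId property name on the negotiate response is taken from the real SignalR protocol; the talk just says "response dot connection ID":

```js
// start(): negotiate, then walk the transports until one of them works.
signalR.prototype.start = function () {
    var connection = this;

    return $.ajax(connection.url + "/negotiate").then(function (response) {
        connection.id = response.connectionId;

        // Build the list of transport names, skipping the shared "_" logic.
        var supportedTransports = [];
        for (var key in transports) {
            if (key !== "_") {
                supportedTransports.push(key);
            }
        }

        // The async loop: try each transport; on failure, recurse to the next.
        // No base case, as noted on stage; real SignalR handles exhaustion.
        function tryConnect(index) {
            var transportName = supportedTransports[index],
                transport = transports[transportName];

            return transport.start(connection).then(function () {
                connection.transport = transport; // success: remember it
            }, function () {
                return tryConnect(index + 1); // failure: try the next one
            });
        }

        return tryConnect(0);
    });
};
```

And the chat page script consumes it just as described: connect, register a receive callback, and only wire up the UI once start has finished:

```js
$(function () {
    var con = $.connection("chat.ashx");

    con.receive(function (data) {
        $("#messages").append($("<li/>").text(data)); // .text() encodes, no XSS
    });

    con.start().done(function () {
        $("#send").click(function () {
            con.send($("#message").val());
        });
    });
});
```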
Just like in Ajax, you don't want to wire up the click events until you know that the click handler will work successfully. That's a common mistake people used to make in the early days of Ajax. Most people don't now, but with SignalR we need to make sure people go: hey, call start; then, when that's done, go and wire up this part. So I'll say... what is it? Message. Send, sorry: send. When that is clicked (that's my button), go ahead and try calling out to the server. So we'll do con.send and send the current value of that text box: message.val(). Boom. I think it's... I don't know. Let's take a bet. And that's the end of the demo. Right there. Oh, we need to run it. All right. I can't look. Okay. Chat, on the left side of the screen. No. All right, I have a reference issue. All right. So that's on line 80. Always JavaScript. Told you mine was harder. Oh, man. No one picked that up. No books for anybody now. If I'd used TypeScript, maybe you would have. If I'd used TypeScript, I would have just typed it as any. All right. Try that again. Yeah. Woo. That's awesome. So there we go. Don't stress it. Oh, sorry. We'll uncover the race conditions; it'll all explode on screen. So there we go. That is SignalR written from scratch on stage. So hopefully you have a really nice understanding of what's going on behind the scenes now, both in the server-side code (asynchronous ASP.NET is horrible... I mean, no, it's beautiful. It's horribly hard, which is why all that code was in a region) and in the JavaScript code, where you can see how we did the transport fallback and whatnot. So we have about five minutes for questions, and I can hand out books for good questions. Now, someone got a book because they corrected me as I was going. That was you, so you get a book straight away. Awesome. So I have 16 more books, and I would really love to give them to people who ask good questions, rather than just whoever gets here first. So, yes? I had a question: in the beginning of the demo, you showed that the script was machine-generated. Why didn't you write that script? So the question was: in the beginning, when I did the move-shape demo, I had the reference to signalr/hubs, because writing hubs from scratch would take five hours. No. So I wouldn't. Actually, you can go on: you write hubs while I'm talking to them. Go on. I'll watch. So that's why. So yes, what we wrote here was the lower-level API that exists in SignalR, the PersistentConnection API, but most people will use the higher-level hub API. And some people use both. So there was another question here. Yes: what was the connection ID? Good question. In this version of SignalR, we don't actually use it for much. In the real version of SignalR, the connection ID is used for addressing (well, we use it for addressing in this version too, but we don't demo it). So the connection ID is a unique identifier for this connection, which means that other people can send you messages directly. It also gives us something to track the connection by. So in the real SignalR, we have a background thread that monitors all the connections so that we can detect, for instance, when the client has gone away, and then after a certain period of time we'll kill the response.
So rather than relying on that client-disconnected token, because we work on .NET 4.0 as well and that was only introduced in .NET 4.5. So it's used for those two purposes. So please don't kill anybody. Oh, look at that. Over here. Can you return something from the client? The question (oh, great question, we get this a lot) is: can you return something from the method on the client? So in the move-shape demo, I basically called a method on the client, right? moveShape. And the question is: can I return a value from that back to the server invocation? Not at all. That would be cool, though, right? It would be very cool. But if you really sit down and think about what would be involved in trying to make that work, it's not something we want to put in the framework. Now, if you want to do that in your application, it's basically easy: you just have a second server-side method, which is the client callback, and the client-side code calls that back with the return value, rather than having it returned as an actual return value. Okay? All right, next. Yes? How many transports? In the real SignalR there are four transports: the two that we coded here, server-sent events and long polling; we also support forever frame, which works in IE and gives us HTTP streaming in IE, so you don't have to use long polling there; and then we have WebSockets, which is the highest fidelity. It's like the best one: a new standard, a full-duplex binary channel. It's basically TCP sockets established over HTTP, but it's much harder to code from scratch, okay, so we didn't do that one on stage. But there are four. And we try WebSockets first, and if that fails, we'll try server-sent events, then forever frame, then finally long polling. Oh, look at that. Yes? The question is: will we add WebRTC support? WebRTC is interesting, because WebRTC is actually more about peer-to-peer. It's about one browser to another browser, whereas SignalR is much more about server to client. So we've talked about WebRTC. I don't think you'll ever see WebRTC as a transport in SignalR, but you may see a WebRTC add-on that lets you use the SignalR API to do peer-to-peer from JavaScript using WebRTC, even though the initial establishment was done by SignalR, if that makes sense. Right? Yes, behind. So the question was: if we want reliable delivery of the message, does that depend on the transport? The immediate answer is that SignalR is not reliable messaging. SignalR emulates a socket. So anyone who's done network programming knows that there are like seven layers, and reliable messaging sits on top of that stack. So if you want reliable messaging, you could do it on top of SignalR, but we really emulate a socket. We do the buffering that we showed you for long polling with the cursor, but we only buffer for a very short period of time. It's really only to allow the JavaScript to run and then come back again, or to deal with bumps in the network that last seconds, not minutes. And there's no replay, there's no idempotency. We do ensure order (we do our best to ensure order), but we don't ensure that you won't miss messages: if you go away for too long and then come back, you may miss a message, because we overwrite on the server so you don't run out of memory. Okay? So SignalR is not reliable messaging. Thank you. Who else? Anyone there? Yes?
Do you feel WebSockets is getting the traction and commitment that it needs? From all browsers? So WebSockets is supported in IE10 on the desktop. Is it supported in IE10 on the phone? I don't know if it is. It's supported in Windows 8 Store apps in WinRT. It's supported in iOS. It's supported in Android browsers, Chrome obviously, Opera, Safari, Firefox. On the server side, most of the big hosting companies support it now. Azure Web Sites does not yet; it will do. Azure Cloud Services and VMs do. People like AppHarbor, I don't know if they do. I think they do. I'm not sure. I think it's getting better support. In order to get the best chance of having a client and a server successfully connect with WebSockets, you should use SSL, because it makes the stream opaque to everybody in between, so they can't make decisions about it (oh, it's long-running and it hasn't had any bytes); they just don't know. So we suggest doing it over SSL. Yes? Is there anything you need to configure on the server in order to handle lots of clients? Yes: by default, IIS will support 5,000 concurrent connections out of the box. If you want to do more than that, you can totally change that; it's just a config value. There are about three config values that you should really change; we have a blog or wiki article that shows you all about this. Basically, your scale limitation will be memory on a single server. I've seen a server running 150,000 SignalR connections. I've seen an ASP.NET server running 460,000 active WebSocket connections. They used a lot of RAM, like 20-odd gig, but they all worked. So Windows scales extremely well for sockets, and ASP.NET scales really well if you write good code. Like everything, when you get into the scale of thousands and thousands, the tiniest mistake will destroy your application very, very quickly. So you still have to test your app, just like you would any big web application. But yes, you can change it. Yes? What's the future of SignalR? We're currently working on 2.0. 2.0 will be out with Visual Studio 2013 and the ASP.NET web stack wave that's coming: Web API 2, MVC 5, that sort of stuff. The biggest features in 2.0: we're dropping support for .NET 4.0 on the server (it's 4.5 only), and a lot of performance and stability improvements. A few API improvements, not a lot, to be honest; it's mostly an engineering effort, 2.0. There's a new client: we're working on a portable class library and a Xamarin client, so you'll be able to use SignalR in your iOS apps that use Xamarin. What else in 2.0? Bugs. Bugs, fixing bugs. It's all up on GitHub, so none of that is secret. Just go up to GitHub and look at our milestones and you'll see. We're out of time, but everyone's still here, so I'll answer questions. Yep. The question is: what kind of production systems are using SignalR? JabbR! Yay! JabbR is our chat app that we run. TFS: so you know the TFS team room, the room collaboration feature? They just launched it on TFS online. Uses SignalR. Visual Studio doesn't use SignalR... oh, it uses it? It comes with it. This thing, right? Oh, yeah, this. Yeah, that uses SignalR. There's a whole bunch of customers that I can't tell you the names of, but they're looking at using SignalR or have already started using SignalR. Is there anyone on this side before I take one on this side? Nope. Yep. Are there any mobile clients being worked on? Yes.
So we have a .NET client that works on Windows Phone 8 and Windows 8 Store apps. And, I mean, it works; it's the same .NET SignalR client that you use in desktop apps. We're working on the Xamarin client so that you can use it in Android and iOS apps, and on Mac. Anything to watch out for in apps using the clients? I mean, there's really not anything you have to watch out for. All the clients should function equally well. You may have some transports that don't work in some clients, but at the level you deal with, the API, it should all just work the way you'd expect. Sorry. All right. Any more questions? C++? Coming. We have a C++ client in development right now. Yes. So, the Service Bus scale-out provider only works on Windows Azure Service Bus. That will change when Service Bus 2.0 comes out, which is later in the year; I believe there's a preview of that coming out soon. But we require an API that isn't in the on-premises Windows Service Bus; it's only in the Azure one. They're going to add it in 2.0. You're welcome. We're also going to support Redis and SQL Server for scale-out, if you don't have Service Bus. Did you add hubs? Or did you give up? Oh, oh, oh, oh. Okay. We're like three minutes over. I have four books left. Anyone? Can you do SignalR server to server? You can, but I wouldn't recommend it. If a server is addressable, just post to it. Like, just expose an HTTP endpoint or a socket endpoint, whatever makes sense for the environment, and just post. You don't need SignalR. SignalR is great for when the client is not addressable: there's no way to address a browser; it has to connect to you first. That's where SignalR is really good. I wouldn't use SignalR server to server, personally. Yes? Securing. Securing SignalR. So it's a web app; it's no different. We don't do authentication in the SignalR layer. You do that in your web app already, using cookie-based auth like you do today in ASP.NET. And then, by the time you call start, the forms auth ticket will flow up, or whatever ticket you're using, and we flow that user context, that principal, all the way up to the hub API. And then we give you an Authorize attribute, just like MVC. Yep. Secure the client in what way? I'm not really sure what you mean by that. Use SSL. If you want to protect against man-in-the-middle attacks, use SSL. One good catch. Sorry, nearly took your head off. Two more. No? Okay, first come, first served. These two down here. Awesome. Thank you, everybody. Sure. Sorry, served again.
The real-time web is here. You've seen the demos before: synchronized moving shapes across browsers and Windows apps. But now you want to *really* understand what's going on behind the curtain. What better way than to watch one of the SignalR co-creators build a SignalR-like framework from scratch on stage? Knowing how it works will help you use it better, and might just prevent you from making mistakes based on incorrect assumptions. Know your tools and learn the magic behind SignalR.